Stuff I worked on
<h1>Technical Interview Notes</h1>
<p>Leonard Chan, 2016-07-12</p>
<h2>Sorting</h2>
<ul>
<li><strong>Divide and conquer</strong>: Splitting a problem into smaller subproblems similar to the original until the problems are small enough to be solved on their own. Afterwards, the solutions are propagated back up to solve the larger problem.</li>
<li><strong>In-place</strong>: Elements are swapped within the array as it is sorted instead of a new sorted array being allocated and returned.</li>
</ul>
<h3>Merge Sort</h3>
<p>Divide and conquer sorting algorithm that works by splitting an array into smaller chunks. The chunks are split in half until each is of size 1, at which point each chunk is trivially sorted. After dividing, pairs of sorted chunks are merged, sorting as they are combined, until only one chunk remains: the final sorted array.</p>
<ul>
<li>Best, average, and worst case time complexities are all O(n log(n)). Each round of merging iterates through every element across all pairs of chunks, which is O(n) work, and the number of chunks halves after each round, so there are about log(n) rounds of merging in total.</li>
<li>The space complexity is O(n) since the sorted elements need to be stored in an intermediary array when merging, and that array grows with the length of the initial array. As a result, the algorithm is also not in-place.</li>
</ul>
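<p>A minimal Python sketch of the algorithm above (the function name and structure are my own, not from any particular implementation). Note that, as discussed, it returns a new array rather than sorting in-place:</p>

```python
def merge_sort(arr):
    # Base case: a chunk of size 1 (or 0) is already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the chunk halfway and sort each half.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge: join the two sorted chunks, sorting as they are merged.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One chunk is exhausted; the rest of the other is already sorted.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

<p>The intermediary list <code>merged</code> is what gives the algorithm its O(n) space cost.</p>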
<h3>Quicksort</h3>
<p>Divide and conquer algorithm that sorts very similarly to merge sort, but uses a pivot and a wall to swap elements in place, avoiding having to create temporary arrays when merging. Quicksort works by partitioning the array into two partitions around a pivot such that everything to the left of the pivot is less than it, and everything to the right of the pivot is greater than it. The pivot's final position after partitioning is tracked by the wall: a counter that starts at the beginning of the array and is incremented as the partition grows. The pivot itself is an arbitrary element in the array.</p>
<p>The partitioning works by iterating through all the elements of the array, swapping each element that is less than the pivot with the element at the wall (which starts at index 0). Afterwards the wall is incremented by 1, moving it to the right of the newly placed element. After iterating through all elements, the pivot is swapped with the element at the wall. At this point, the array is divided into 3 sections: the left partition of elements less than the pivot, the pivot itself, and the right partition of elements greater than (or equal to) the pivot.</p>
<p>The algorithm continues by applying this partitioning recursively to the newly formed left and right partitions until the partitions are of size 1, in which case they are sorted. After having gone through all sub-partitions, the entire array has been sorted in-place.</p>
<ul>
<li>The main operation behind the sorting is the swapping of elements, which has constant space and time complexity since a swap only requires a single temporary variable.</li>
<li>The worst case time complexity is <span class="math">\(O(n^2)\)</span>, which occurs when the partitions are very imbalanced, causing either the left or right partition to always be of size 0. In this case, each division creates a new partition whose length is just one less than the previous, which must still be iterated over to produce another partition. Both the iteration within each partition and the number of partitions made grow with the length of the array, n. This scenario occurs when the smallest or largest element in the array is chosen as the pivot for every partition, since all other elements must end up on one side of the pivot. To avoid always selecting the min or max, the median of a small sample of elements (e.g. the first few elements of the partition) is often used as the pivot.</li>
<li>The average and best case time complexities are both O(n log(n)). The best case occurs when the sub-partitions created are of equal size, since the size of each sub-partition then decreases by a factor of 2 each time; always happening to select the median of the partition leads to this. The average case involves more complex math, so see <a href="https://www.khanacademy.org/computing/computer-science/algorithms/quick-sort/a/analysis-of-quicksort">Khan Academy</a> for a better explanation.</li>
<li>The space complexity is constant since the array is sorted in-place and all the swaps just use 1 temporary variable. (This ignores the recursion stack, which is O(log(n)) deep on average.)</li>
</ul>
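<p>A Python sketch of the wall/pivot partitioning described above, using the last element of each partition as the (arbitrary) pivot rather than a median of a sample; the names are my own:</p>

```python
def quicksort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:                 # partitions of size 0 or 1 are sorted
        return arr
    pivot = arr[hi]              # arbitrary pivot: last element
    wall = lo                    # everything left of the wall is < pivot
    for i in range(lo, hi):
        if arr[i] < pivot:
            # Swap in place using a single temporary (constant space).
            arr[i], arr[wall] = arr[wall], arr[i]
            wall += 1
    # Place the pivot between the two partitions.
    arr[hi], arr[wall] = arr[wall], arr[hi]
    quicksort(arr, lo, wall - 1)  # left partition (< pivot)
    quicksort(arr, wall + 1, hi)  # right partition (>= pivot)
    return arr
```
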
<h2>Hash Tables</h2>
<p>Hash tables (also known as dictionaries, maps, or associative arrays) are abstract data types where each value is associated with a unique key. The elements themselves are actually stored in an array, and the index/offset at which an element is stored is the result of a hash function applied to its key. This hash function is responsible for converting a key of a given data type to an integer representation used as this offset.</p>
<h3>Hash Function</h3>
<p>The hash function itself should be implemented in such a way that the hash for a key can be calculated quickly (near constant time), and ideally each unique key produces a unique hash. As a result, most of the time, hash functions involve some complicated math involving prime numbers, and coming up with a universal hash function that works for nearly all data types is impossible. (See <a href="https://github.com/python/cpython/blob/2.7/Objects/stringobject.c">python's implementation for string hashing</a> as an example of how hashing works. I tend to use python's hashing functions when implementing my own hash tables.) If a hash function produces a hash whose value is at least the length of the array, the hash is modulo'd with the length of the array to produce an index that can be used on this array.</p>
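<p>To make the hash-then-modulo step concrete, here is a toy polynomial hash in Python (similar in spirit to, but much simpler than, CPython's string hash; the names and constants are my own):</p>

```python
def simple_string_hash(key):
    # Polynomial rolling hash: mix each character into the running value.
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF  # keep the hash within 32 bits
    return h

def index_for(key, table_length):
    # The hash may be at least the length of the array, so it is
    # modulo'd with the length to produce a usable index.
    return simple_string_hash(key) % table_length
```
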
<h3>Collisions</h3>
<p>Regardless of how good the hashing algorithm is, there is a possibility that two keys produce the same hash, in which case, if an element already occupies the space for that hash, the new element will still have to be stored somehow (it should not be dropped).</p>
<p>To keep collisions manageable, the hash function should produce a uniform distribution of hashes, allowing all elements to be accessed in roughly equal time. A bad hash function produces a distribution of hashes such that a majority of the resulting hashes are the same or close to each other, resulting in a distribution with peaks. If the hashes produced for a wide range of keys are the same, lookup degrades to something like lookup in a list.</p>
<p>Two ways to handle keys with duplicate hashes are through separate chaining and open addressing:</p>
<ul>
<li>Separate chaining involves having each element in the hash table be a list (typically a linked list); an element whose key hashes to that index is simply appended to the list. This way, values for keys with duplicate hashes can still be stored at the same index.</li>
<li>Open addressing involves placing values in the next available empty space in the array. If the space for a given hash is already occupied, the next available space is selected according to some probe sequence, which returns the next index to try. Linear probing simply increments the index by 1 until an empty space is found; quadratic probing increments the original index by the square of the kth step of the probe sequence.</li>
</ul>
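<p>A minimal separate-chaining table in Python (a sketch using plain lists instead of linked lists; this is not how CPython's dict works, which uses open addressing):</p>

```python
class ChainedHashTable:
    def __init__(self, size=8):
        # Each slot in the array is a list (chain) of (key, value) pairs.
        self.buckets = [[] for _ in range(size)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        chain = self._bucket(key)
        for i, (k, _) in enumerate(chain):
            if k == key:             # key already present: update in place
                chain[i] = (key, value)
                return
        chain.append((key, value))   # empty slot or collision: append

    def get(self, key):
        # A long chain makes this scan linear, like lookup in a list.
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```
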
<p>Both approaches involve iterating over some sequence when collisions occur, effectively making lookup scale linearly. In order to reduce collisions, some hash table implementations will dynamically resize the array to allow for more hashes to be stored. This assumes that the collisions are primarily a result of modulo-ing against the length of the array and not a result of the distribution of hashes formed by the function itself. I believe python's <a href="https://github.com/PiJoules/cpython-modified/blob/master/Objects/dictobject.c">dict implementation</a> uses a combination of open addressing and resizing the array once 2/3 of it is occupied.</p>
<h3>Operations</h3>
<h4>Lookup</h4>
<p>This involves checking if an element for a given key exists in the hash table, which is essentially just running the key through the hash function. If chaining or open addressing is implemented, then in order to retrieve the proper element for a given key, the key is also compared against any subsequent entry in the list or probe sequence, making the worst case time complexity for lookup linear. This can be avoided, though, by resizing the array when a certain capacity threshold is reached to reduce the number of collisions.</p>
<h4>Insertion/Updating</h4>
<p>This involves inserting a value for a given key at an index in the array. If open addressing or chaining is implemented and a collision occurs, the value is instead placed at the end of the list or at the next available spot found through probing.</p>
<h4>Deletion</h4>
<p>This varies depending on implementation. From a high level perspective, deleting an element could just mean setting the value at the hash to NULL and decreasing a counter for the number of stored elements by 1. If separate chaining is implemented, deletion happens on the list for the given key and may involve iterating over the whole list. If open addressing is implemented, the deleted element needs to be replaced with a marker (a "tombstone") so that probe sequences passing through that slot continue on to subsequent elements instead of stopping early.</p>
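<p>A sketch of linear probing with tombstones in Python (illustrative only; real implementations also resize, and this simplified <code>set</code> does not handle re-inserting a key that sits past a tombstone):</p>

```python
EMPTY = object()      # slot never used
TOMBSTONE = object()  # slot deleted; probing must continue past it

class ProbingHashTable:
    def __init__(self, size=8):
        self.slots = [EMPTY] * size

    def _probe(self, key):
        # Linear probing: step through indices one at a time, wrapping.
        i = hash(key) % len(self.slots)
        for _ in range(len(self.slots)):
            yield i
            i = (i + 1) % len(self.slots)

    def set(self, key, value):
        for i in self._probe(key):
            slot = self.slots[i]
            if slot is EMPTY or slot is TOMBSTONE or slot[0] == key:
                self.slots[i] = (key, value)
                return
        raise RuntimeError("table full; a real table would resize")

    def get(self, key):
        for i in self._probe(key):
            slot = self.slots[i]
            if slot is EMPTY:      # a never-used slot ends the probe
                break
            if slot is not TOMBSTONE and slot[0] == key:
                return slot[1]
        raise KeyError(key)

    def delete(self, key):
        for i in self._probe(key):
            slot = self.slots[i]
            if slot is EMPTY:
                break
            if slot is not TOMBSTONE and slot[0] == key:
                self.slots[i] = TOMBSTONE  # marker, not EMPTY
                return
        raise KeyError(key)
```
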
<h3>Complexity</h3>
<p>The main benefits of hash tables are constant lookup, insertion, and deletion time. In the worst case scenario, when a bad hash function is used, the number of values colliding at a given index can grow linearly, effectively making lookup as slow as whatever mechanism was implemented to handle collisions, but this can be countered by resizing the hash table. For average and best case scenarios, these operations are effectively done in constant time.</p>
<p>The cost of the hash table, though, is the amount of space needed. In order to support a large number of hashes, a large array is needed to store all the elements. Objects in python are actually implemented as dictionaries in the underlying C code, and since everything in python is an object, this is one of the reasons python programs typically use much more memory than programs in other languages.</p>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
var location_protocol = (false) ? 'https' : document.location.protocol;
if (location_protocol !== 'http' && location_protocol !== 'https') location_protocol = 'https:';
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = location_protocol + '//cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML';
mathjaxscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'AMS' }, Macros: {} }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>
<h1>Adding Custom Builtin Functions to the Python Interpreter</h1>
<p>Leonard Chan, 2016-06-14</p>
<p>When I have a few minutes of spare time, like if I’m on the bus heading to work or waiting for class to start, one of the things I like to do is explore the cpython source code. I want to learn how the Python interpreter works as a way to become a better Python programmer (or just a better programmer in general), and perhaps someday contribute to the development of Python in later releases.</p>
<h1>Objective</h1>
<p>One of the first things I did to get more familiar with the source is add my own custom builtin functions to the interpreter. I’m not talking about making a new function in python like <code>def func(args)</code>. I’m also not talking about making an extension module in C that can be imported in python like <code>from my_c_module import my_c_func</code>. I’m talking about adding a new one of <a href="https://docs.python.org/3/library/functions.html">these functions</a>: functions like <code>sum()</code>, <code>abs()</code>, and <code>range()</code> that are automatically built into the language and do not require manually importing anything to use. I figured this would be a good way to learn more about the source since I’d be learning how the core functions, which are part of the source, are implemented and used.</p>
<p>The function I will be adding for this example is a simple product function which will take the product of a list of numbers, similar to sum, but with multiplying instead of adding and a starting value of 1. Now, I know the idea of a product function has been <a href="http://bugs.python.org/issue1093">proposed before and rejected due to low demand for the function</a>, and there are <a href="http://stackoverflow.com/questions/595374/whats-the-python-function-like-sum-but-for-multiplication-prod">many existing ways to implement this function in python</a>, but this is just for learning purposes.</p>
<h2>Python Version</h2>
<p>The version of python I am working with is the first alpha version of Python 3.6.0 (more specifically, <a href="https://docs.python.org/dev/">3.6.0a1</a>). The reason I am working with python 3 instead of 2 is because nearly all active development is in python 3, and the latest version of python 2 at the time (2.7.11) is only receiving bug fixes and backports from python 3. <a href="https://www.python.org/dev/peps/pep-0404/">There is a whole PEP dedicated to why Python 2.8 will never be a thing</a>. You know it’s serious because the PEP number is 404. The reason I am working with an alpha version of python 3 instead of the latest release at the time (3.5.1) is because I accidentally cloned the cpython repo from github without first checking that the master branch contained the latest release version rather than the development version.<br />
¯\_(ツ)_/¯</p>
<h1>How it works</h1>
<p>Now, I kind of lied when I mentioned not making a C extension. It turns out that adding a builtin function is a lot like making a C extension for python. There are already lots of articles and documentation on the internet about making C extensions, especially <a href="http://dan.iel.fm/posts/python-c-extensions/">this one</a>, so I will not go into great detail about making one from scratch, just how it gets added and what changes I made to the source.</p>
<p>When you normally make a C extension, you add a <code>PyMethodDef</code> struct to an array of PyMethodDef structs which represent the functions you would like to expose in python space. The PyMethodDef struct is defined as:</p>
<pre><code class="C">struct PyMethodDef {
const char *ml_name; /* The name of the built-in function/method */
PyCFunction ml_meth; /* The C function that implements it */
int ml_flags; /* Combination of METH_xxx flags, which mostly describe the args expected by the C func */
const char *ml_doc; /* The __doc__ attribute, or NULL */
};
</code></pre>
<p>Hopefully the comments in this code, taken straight from the source, are enough to explain the individual members of the struct.</p>
<p>Inside <code>Python/bltinmodule.c</code>, the array of PyMethodDef structs this gets added to is:</p>
<pre><code class="C">static PyMethodDef builtin_methods[] = {
{"__build_class__", (PyCFunction)builtin___build_class__, METH_VARARGS | METH_KEYWORDS, build_class_doc},
{"__import__", (PyCFunction)builtin___import__, METH_VARARGS | METH_KEYWORDS, import_doc},
BUILTIN_ABS_METHODDEF
BUILTIN_ALL_METHODDEF
…
{"vars", builtin_vars, METH_VARARGS, vars_doc},
{NULL, NULL},
};
</code></pre>
<p>In the source, some of the array elements are given as literal structs while others like <code>BUILTIN_ABS_METHODDEF</code> are implemented as macros. You may notice that this array does not contain all builtin functions. The remaining functions like <code>bytearray()</code>, <code>int()</code>, or <code>str()</code> are actually constructors located in the <code>_PyBuiltin_Init</code> method, also in <code>Python/bltinmodule.c</code>. (I don’t like using the word constructor when referencing these in python, but that’s another story.)</p>
<p>So, in order to add my custom builtin method, I just need to add a PyMethodDef to this array containing my function name, implementation, argument flags, and docstring.</p>
<h1>Implementation</h1>
<p>All changes implemented are in my copy of the source on <a href="https://github.com/PiJoules/cpython-modified">Github</a>.</p>
<p>I decided to try and isolate my additions as much as possible from the existing source so that it would be easier to reference later by making my changes stand out. The actual implementation of my builtin product function is pretty much an exact copy of the builtin sum function with a few minor changes.</p>
<h2><a href="https://github.com/PiJoules/cpython-modified/blob/master/Custom/custom.c">Custom/custom.c</a></h2>
<p>This file contains the implementation of my product function and the wrapper for it.</p>
<p>The implementation (builtin_prod_impl) does the exact same stuff as builtin_sum_impl, but with the default value changed from 0 to 1, the actual addition done by PyNumber_Add changed to PyNumber_Multiply for multiplication, and Fast Addition was removed. This Fast Addition was a way to speed up addition by storing the temporary sum in C space instead of Python space. The downside of this is that you always need to check for overflow since you’re working with data types that have a limited range of values. Now this can easily, and quickly, be done by comparing the sign bit of your result against the sign bit of your two numbers that you’re adding (which the interpreter does in this fast addition). However, for multiplication, I cannot find a quick way for checking if overflow occurred that does not involve dividing the product by one of the two numbers to get the other. Regardless, I ended up just using python’s builtin objects for multiplication since they handle overflow and large numbers already.</p>
<p>I have not profiled this, but I am curious to see if multiplying in C and dividing to check for overflow will still outperform PyNumber_Multiply.</p>
<p>The wrapper (builtin_prod) essentially unpacks the arguments into the iterable and starting value for the implementation.</p>
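<p>The semantics of the builtin (ignoring the C-level details and the fast-path discussion above) can be sketched in pure Python:</p>

```python
def prod(iterable, start=1):
    # Like sum(), but multiplying, with a default starting value of 1.
    result = start
    for item in iterable:
        # PyNumber_Multiply handles overflow/big ints in the C version.
        result = result * item
    return result
```
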
<h2><a href="https://github.com/PiJoules/cpython-modified/blob/master/Custom/custom.h">Custom/custom.h</a></h2>
<p>This file just contains the declarations for builtin_prod_impl and builtin_prod, the docstring, and the final macro to be included in the original <code>builtin_methods</code> array.</p>
<h2><a href="https://github.com/PiJoules/cpython-modified/blob/master/Python/bltinmodule.c">Python/bltinmodule.c</a></h2>
<p>To actually add my <code>prod()</code> function to the builtin functions, I just needed to add my macro to the array of builtin functions.</p>
<pre><code class="C">static PyMethodDef builtin_methods[] = {
{"__build_class__", (PyCFunction)builtin___build_class__, METH_VARARGS | METH_KEYWORDS, build_class_doc},
{"__import__", (PyCFunction)builtin___import__, METH_VARARGS | METH_KEYWORDS, import_doc},
BUILTIN_ABS_METHODDEF
BUILTIN_ALL_METHODDEF
…
{"vars", builtin_vars, METH_VARARGS, vars_doc},
#ifdef USE_CUSTOM_BUILTINS
BUILTIN_PROD_METHODDEF
#endif
{NULL, NULL},
};
</code></pre>
<p>The <code>USE_CUSTOM_BUILTINS</code> is just a flag I can pass to the compiler that says whether or not I want to include whatever custom builtin functions I had.</p>
<h2>Makefile</h2>
<p>The last thing to do was just adjust the Makefile such that it accepted this flag, and compiled the product function. Nothing big.</p>
<h1>Usage</h1>
<p>After re-building python from source, I am able to use <code>prod()</code> in both the shell and from a python script.</p>
<pre><code class="python">>>> prod(range(1, 6))
120
>>> prod([]) # Empty iterable, default starting value of 1
1
>>> prod(range(5)) # Zero included
0
>>> prod(range(-3, 4, 2)) # Negative numbers, positive result
9
>>> prod(range(-5, 4, 2)) # Negative numbers, negative result
-45
>>> prod((1, 2), 10) # New starting value
20
>>> prod([2, 2**64]) == 2**65
True
</code></pre>
<h1>Conclusions</h1>
<p>Hopefully, this serves as a nice introduction into how new builtin functions are added. Feel free to send me an email if there’s anything blatantly wrong in this article.</p>
<h1>Resources</h1>
<p><a href="https://github.com/PiJoules/cpython-modified">Source</a></p>
<h1>Fourier Transform Notes</h1>
<p>Leonard Chan, 2016-06-13</p>
<p>These are just notes I put on this website so that I will be able to remember the content and review it easily for later exams. This is not meant to go into full detail about the Fourier Transform, so stuff like derivations and proofs will not be included, because deriving this stuff requires a lot more research than I'd like to do.</p>
<h1>Fourier Transform</h1>
<p><strong>Fourier Transform</strong><br />
</p>
<div class="math">$$ F\{x(t)\} = X(\omega) = \int_{-\infty}^{\infty} x(t)e^{-j \omega t}dt $$</div>
<p><strong>Inverse Fourier Transform</strong><br />
</p>
<div class="math">$$ F^{-1}\{X(\omega)\} = x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)e^{j \omega t}d\omega $$</div>
<p>One of the main purposes of the Fourier Transform is to point out the dominant frequencies of a waveform. This can easily be seen with the FT of a regular sine or cosine wave.</p>
<p><img alt="FT and IFT of cosine wave" src="/images/fourier/cosine.png" /></p>
<p>The FT of a cosine wave is a pair of Dirac delta functions at <span class="math">\(+ω_0\)</span> and <span class="math">\(-ω_0\)</span>. This makes sense since the only frequency in a regular cosine wave is <span class="math">\(ω_0\)</span>. The reason for the peak at <span class="math">\(-ω_0\)</span> is that <strong>the magnitude of the FT of a real signal is always symmetric</strong>. In this case, φ = 0, so the FT of <span class="math">\(cos(2πt)\)</span> will be symmetrical across the y-axis.</p>
<p>For the example above, the FT is not an exact Dirac function since I am only integrating over a finite set of data rather than an infinite set from t = -∞ to +∞. If I had more data points, the peaks would stand out more. Similarly, the reconstruction of x(t) from X(ω) is not exactly the same as the original x(t) because of the finite amount of data points I am integrating over in the frequency domain. <strong>The continuous FT converts continuous data in the time domain to continuous data in the frequency domain.</strong></p>
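<p>The finite-data effect can be reproduced with a direct Riemann-sum approximation of the FT integral in Python (a sketch; the plots above were generated separately, and the window length here is my own choice):</p>

```python
import cmath
import math

def ft_approx(x, ts, n_samples, omega):
    # Riemann-sum approximation of X(w) = integral of x(t) e^{-jwt} dt
    # over a finite window [0, n_samples * ts) instead of t = -inf..inf.
    total = 0j
    for n in range(n_samples):
        t = n * ts
        total += x(t) * cmath.exp(-1j * omega * t) * ts
    return total

w0 = 2 * math.pi                            # x(t) = cos(2*pi*t), so w0 = 2*pi
x = lambda t: math.cos(w0 * t)
peak = abs(ft_approx(x, 0.001, 4000, w0))   # magnitude at the dominant frequency
away = abs(ft_approx(x, 0.001, 4000, 5.0))  # magnitude away from it
```

<p>The magnitude at <span class="math">\(ω_0\)</span> dominates; with more data points (a longer window), the peak stands out more and approaches a delta function.</p>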
<h1>Discrete Time Fourier Transform</h1>
<p><strong>Discrete Time Fourier Transform of x[n]</strong><br />
</p>
<div class="math">$$ X_{s}(\Omega) = \sum_{n=-\infty}^{\infty} x[n]e^{-j \Omega n} $$</div>
<p><strong>Inverse Fourier Transform of X(ω)</strong><br />
</p>
<div class="math">$$ x[n] = \frac{1}{2\pi} \int_{2\pi} X_s(\Omega) e^{j \Omega n} d\Omega $$</div>
<p>In the real world though, data is not necessarily continuous. In the example used to generate x(t), I just found x(t) at very small increments of t to replicate continuous data. Really, I just took discrete values of x(t) sampled every Ts seconds. The DTFT allows us to essentially take the Fourier Transform of discrete/sampled data, though the spectrum is different in the frequency domain than the continuous FT.</p>
<p><img alt="continuous and discrete cosine wave" src="/images/fourier/cosinediscrete.png" /></p>
<p><img alt="FT and DTFT of cosine wave" src="/images/fourier/cosinedtft.png" /></p>
<p><img alt="Comparison between reconstructed discrete cosine wave and original" src="/images/fourier/cosinedtftreconstruct.png" /></p>
<p>In the example above, I take the FT of a continuous cosine wave and the DTFT of the cosine wave sampled every Ts seconds. In this case, Ts = 0.05 s and my sampling frequency (fs) = 20 Hz. The FT of x(t) returns X(ω), which should be two dirac delta functions at positive and negative <span class="math">\(ω_0\)</span>. The DTFT of x[n], however, returns Xs(Ω). This spectrum is different from that of the FT in that:</p>
<ol>
<li><strong>The DTFT is in terms of Ω while the FT is in terms of ω, where <span class="math">\(Ω = ωT_s\)</span>.</strong> (A similar example of this notation is how x[n] = x(nTs).) While ω has units of rad/s, Ω = ωTs is in radians (per sample); the frequency axis of the DTFT is that of the FT scaled down by a factor of the sampling frequency (fs = 1/Ts).</li>
<li><strong>The DTFT repeats every 2π.</strong> This is because the DTFT is the sum of various FTs at frequencies that are 2π apart from each other.</li>
<li><strong>The DTFT transforms discrete data from the time domain to continuous data in the frequency domain</strong> while the FT transforms continuous data in the time domain to continuous data in the frequency domain.</li>
</ol>
<p>Like the IFT, the original discrete signal can also be reconstructed from its DTFT. Theoretically, the continuous signal could be constructed from the discrete signal, though this would require an infinitely large sampling frequency to replicate an infinitesimally small dt.</p>
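<p>Evaluating the DTFT sum at a finite set of evenly spaced values of Ω (i.e. the DFT) reproduces the peaks for the sampled cosine. A pure-Python sketch (my own parameters, not the exact ones from the plots above):</p>

```python
import cmath
import math

def dft(samples):
    # DTFT sum evaluated at Omega = 2*pi*k/N for k = 0..N-1:
    # X[k] = sum_n x[n] e^{-j (2*pi*k/N) n}
    N = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

fs, f0, N = 20, 1, 40   # fs = 1/Ts = 20 Hz, 1 Hz cosine, 2 s of samples
x = [math.cos(2 * math.pi * f0 * n / fs) for n in range(N)]
X = dft(x)
# A 1 Hz cosine should peak at bin k = f0 * N / fs = 2 (and, since the
# spectrum of a real signal is symmetric, also at bin N - 2).
peak_bin = max(range(N // 2), key=lambda k: abs(X[k]))
```
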
<h1>Resources</h1>
<ul>
<li><a href="http://www.mechmat.ethz.ch/Lectures/tables.pdf">Fourier Transforms Table</a></li>
</ul>
<h2>Source Code</h2>
<ul>
<li>Matlab<ul>
<li><a href="https://gist.github.com/PiJoules/cae4321693638e082495">Cosine Fourier Transform</a></li>
<li><a href="https://gist.github.com/PiJoules/e553751fdfad0338865a">Cosine Discrete Time Fourier Transform</a></li>
</ul>
</li>
</ul>
<h1>Test Github Post</h1>
<p>Leonard Chan, 2016-06-12</p>
<p>This is a placeholder for the Github Projects category.</p>