Monday, August 13, 2007
Well, it seems we're in the homestretch of the SoC. I've been working on the FFLU decomp. Strangely, I ran into some problems with modules not being installed properly, but that finally sorted itself out. Now I'm in the debugging phase, and hopefully we'll have a nice new fraction-free LU function.
I'm thinking that maybe I can squeeze in another fraction-free inversion algorithm before next week...
*edit* Did I say inversion? I meant QR. So LU seems to be working, but my QR one is still buggy. Oh well.
Friday, August 3, 2007
Typo
So for whatever reason I kept saying block LU when I really meant fraction-free LU. Those are, of course, significantly different. I've been looking at this paper suggested by someone on the mailing list. It seems reasonably straightforward (even the proof, which is surprising). I hope to finish the LU one soon and hopefully the QR one before the project finishes up.
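The general fraction-free idea (whether or not this is exactly how the paper does it) is that every division performed during elimination is exact, so integer or polynomial entries never leave their ring. Here's a toy sketch of Bareiss-style fraction-free elimination in plain Python - just an illustration of the trick, not the module's actual code, and with no pivoting:

```python
# Toy Bareiss-style (fraction-free) elimination on an integer matrix.
# Simplifications: square input, no pivoting, and it only returns the
# upper-triangular result, whose last diagonal entry is det(A).
def bareiss(rows):
    a = [list(r) for r in rows]       # work on a copy
    n = len(a)
    prev_pivot = 1                    # divisor from the previous step
    for k in range(n - 1):
        if a[k][k] == 0:
            raise ZeroDivisionError("zero pivot; a real version would pivot")
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # this division is exact -- that is the whole point
                a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) // prev_pivot
            a[i][k] = 0
        prev_pivot = a[k][k]
    return a

A = [[2, 3, 1],
     [4, 1, 5],
     [3, 2, 6]]
U = bareiss(A)
print(U[-1][-1])   # -30, which is det(A)
```

Because the intermediate entries stay integers (or polynomials), you avoid the rational-arithmetic blow-up you get from naive Gaussian elimination on exact types.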
Friday, July 27, 2007
new core
So the new core was added by the SymPy crew and it seems much faster than before - at least according to my test cases. Mateusz tells me my module worked out-of-the-box, which is always nice to hear.
I managed to put together a nice (in my opinion) tutorial for my linear algebra pack. Though I will say the inability to have a table of contents with HTML anchors makes the page very hard to read and navigate. This is usually something quite easily done in a wiki.
The tutorial brought to light a couple of stupid mistakes and inconsistencies I had made so it was helpful in the debug process too.
I'm thinking block LU decomp next week, kind of an advanced thing as we come to the end of the project....
Tuesday, July 17, 2007
Back to bugs
Had to write some extra code for reducing to row echelon form, but now it seems like my eigenvalue and eigenvector functions work properly (with help from Robert's nice polynomial work).
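To give a feel for what that functionality does, here's an illustrative snippet written against the Matrix methods as they exist in present-day SymPy (rref, charpoly, eigenvals); the names in the 2007 code may well have differed:

```python
from sympy import Matrix, symbols, roots

lam = symbols('lam')
M = Matrix([[3, 1],
            [2, 2]])

# reduced row echelon form: returns (rref matrix, pivot column indices)
R, pivots = M.rref()

# eigenvalues via the characteristic polynomial ...
p = M.charpoly(lam)                  # lam**2 - 5*lam + 4
print(roots(p.as_expr(), lam))       # eigenvalues 1 and 4, each with multiplicity 1

# ... or directly
print(M.eigenvals())
```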
So it's back to that pesky expression bug from before the midterm evaluation. To recall, for whatever reason my Gram-Schmidt function isn't properly evaluating expressions. Back to the grind...
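For context, the procedure itself is just textbook Gram-Schmidt; a rough sketch (in present-day SymPy, not the project's actual code) looks like this, with an explicit simplify() thrown in because symbolic entries don't always evaluate on their own:

```python
from sympy import Matrix, simplify

def gram_schmidt(vectors, normalize=True):
    """Classical Gram-Schmidt on a list of SymPy column vectors."""
    basis = []
    for v in vectors:
        w = v
        for u in basis:
            w = w - (u.dot(v) / u.dot(u)) * u   # subtract the projection of v onto u
        w = w.applyfunc(simplify)               # force the symbolic entries to evaluate
        if any(e != 0 for e in w):              # drop vectors that reduced to zero
            basis.append(w / w.norm() if normalize else w)
    return basis

vs = [Matrix([1, 1, 0]), Matrix([1, 0, 1])]
for q in gram_schmidt(vs):
    print(q.T)
```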
Monday, July 9, 2007
Midterm evaluation
Well, we're about halfway done now. I guess I'm pretty happy with both my contribution and what I've learned (coding-wise). I think the matrix manipulation functionality is pretty extensive now: there's an inverter, a solver, some factorization methods and some other stuff. I guess some of today will be spent filling out this survey.
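To give a rough feel for the kind of calls that means, here's an illustrative snippet using the Matrix methods as they exist in current SymPy (inv, LUsolve, LUdecomposition, cholesky); the interface back then may have looked a little different:

```python
from sympy import Matrix

A = Matrix([[4, 2],
            [2, 3]])
b = Matrix([6, 5])

print(A.inv())                     # matrix inverse
print(A.LUsolve(b))                # solve A*x = b via an LU factorization
L, U, perm = A.LUdecomposition()   # LU factors plus the row swaps used
print(A.cholesky())                # Cholesky factor (A is symmetric positive definite)
```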
I guess my next step will be to try and flush out the expression evaluation problem. Then add a bit more new eigenvalue-related functionality. I've been in touch with Robert, who has helped me out by writing a wrapper for me to use for solving for eigenvalues. Thanks Robert!
After that, who knows? I'm considering adding more matrix types (e.g. integer) with more optimized algorithms behind the scenes but I'll cross that bridge after the eigenvalue dance.
Monday, June 25, 2007
Sparse matrices, so close I can almost taste it
So the sparse matrix implementation has begun (and almost ended). In the spirit of python I decided to try using python's built-in dictionaries for my sparse matrix implementation.
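The dict-of-keys idea is simple enough to sketch in a few lines. This is a toy illustration, not the module's actual class, which has to do a lot more (shape checks, arithmetic, conversion to dense, ...):

```python
# Bare-bones dict-of-keys sparse matrix: a Python dict keyed by (row, col).
class SparseMatrix:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.data = {}                     # (i, j) -> nonzero value

    def __setitem__(self, key, value):
        if value == 0:
            self.data.pop(key, None)       # never store explicit zeros
        else:
            self.data[key] = value

    def __getitem__(self, key):
        return self.data.get(key, 0)       # anything absent is zero

    def nnz(self):
        return len(self.data)              # number of stored (nonzero) entries

S = SparseMatrix(1000, 1000)
S[3, 7] = 5
print(S[3, 7], S[0, 0], S.nnz())           # 5 0 1
```

The nice part is that storage is proportional to the number of nonzero entries rather than to rows*cols.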
With limited experience in deep OOP, I've had to learn its subtleties quickly. The nature of scope really comes into play in a way that can be quite annoying.
*edit* 2 hours later, I have tasted it and it tastes good.
Tuesday, June 19, 2007
Circular imports
So in moving a function to solvers.py, I recently came across an interesting phenomenon in Python. It's very well-illustrated here:
http://www.thescripts.com/forum/thread22561.html
It boils down to the fact that you can't do circular imports except in a specific manner. Now, I'm not one for hacking around this specific type of behaviour - I think you can just restructure the code so as not to do this - but why does it work with "import foo" and not with "from foo import bar"? Interesting....
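Roughly the situation the thread describes, as a two-file sketch (hypothetical modules a.py and b.py):

```python
# a.py
import b                      # OK: this only binds the (possibly half-loaded) module object

def ping():
    return "ping"

def call_pong():
    return b.pong()           # the attribute is only looked up at call time


# b.py
import a                      # OK for the same reason

# from a import ping         # this is the version that breaks: if b gets imported
#                             # while a is still loading, the name `ping` may not
#                             # exist in a's half-initialised namespace yet

def pong():
    return "pong"
```

The plain "import" form works because the module object is created and registered before its body finishes executing, so later attribute lookups find everything; "from foo import name" needs the name to exist at import time.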
- C