A while ago I used pip to install Python's cssselect. I'm not sure (I should have taken notes!), but it seems like it was difficult, requiring me to get in and install various C libraries. This time it was effortless. Here are the instructions that I followed:
http://www.installion.co.uk/ubuntu/saucy/universe/p/python-cssselect/en/install/index.html
Recursive algorithms can be slow because they end up solving the same little problems over and over again. To speed them up, you can use a technique called "memoization." Memoization allows algorithms to run much more quickly by remembering solutions to problems they have already solved. I'm the recursive algorithm. This blog is my memoization.
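A minimal sketch of the idea in Python, with Fibonacci as the classic example (the function names are mine):

```python
from functools import lru_cache

def fib_plain(n):
    # Naive recursion: solves the same subproblems over and over again.
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recursion, but every solved subproblem is remembered and
    # reused, so each value is computed only once.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

fib_plain takes exponential time, so even moderate inputs crawl; fib_memo handles the same inputs instantly.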
Monday, December 29, 2014
Labels:
apt-get,
cssselect,
pip,
python,
ubuntu universe
Editing Animated Gif duration with GIMP
I made a screencast using Byzanz that went well, except that I had given it a duration that was perhaps 20 seconds too long. The result was an animated GIF that needed to be truncated.
After a little research, I learned that you can do this with GIMP.
GIMP will import the GIF as a sequence of layers, and each layer will have a duration as part of its name. In my case, I just had to delete the last layer, which was by far the longest in duration, and the problem was solved. Alternatively, you can edit the duration of a layer by editing its name.
Labels:
Animated Gif,
Byzanz,
GIMP,
screencast,
truncate
Good argument for separating pure functions from those that are not so pure
The Opinionated Guide to Python makes a nice argument for keeping functions purely functional (without implicit reliance on global variables and context and without side effects) whenever possible:
http://docs.python-guide.org/en/latest/writing/structure/#object-oriented-programming
This speaks to me especially after trying to test some of my methods in Django and seeing the time and pain costs associated with setting up sufficient context to test them.
Carefully isolating functions with context and side effects from functions with logic (called pure functions) allows the following benefits:
- Pure functions are deterministic: given a fixed input, the output will always be the same.
- Pure functions are much easier to change or replace if they need to be refactored or optimized.
- Pure functions are easier to test with unit tests: there is less need for complex context setup and data cleaning afterwards.
- Pure functions are easier to manipulate, decorate, and pass around.
In summary, pure functions are more efficient building blocks than classes and objects for some architectures because they have no context or side effects.
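A tiny illustration of the difference (the names here are mine, not from the guide):

```python
totals = []

def impure_add(x):
    # Impure: reads and mutates state outside the function, so the same
    # input can produce different outputs depending on call history.
    totals.append(x)
    return sum(totals)

def pure_add(nums, x):
    # Pure: the result depends only on the arguments, and nothing outside
    # the function is touched -- trivial to test, no setup or teardown.
    return sum(nums) + x
```

Calling impure_add(5) twice returns 5 and then 10; pure_add([5], 5) returns 10 every time.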
Saturday, December 27, 2014
Gif screencasts
I've been extremely impressed by webpages that have gif screencasts embedded in them. Today, since I committed to write some tutorials, I learned how to make them.
I'm using Byzanz, which can be installed on Ubuntu easily:
http://askubuntu.com/a/123515
You have to type byzanz-record to get it to run, and you need to specify a file name to record to. Other than that, you just run it.
There are some nice tips on how to limit byzanz to a single window, etc. here: http://primedotprog.com/creating-screencast-gifs-in-linux-ubuntu-14-04-lts/
Tuesday, December 23, 2014
Two approaches for running Selenium from cron
1) This is easy: give cron a display by adding "export DISPLAY=:0" to your crontab command: http://blog.markshead.com/392/selenium-no-display-specified/
2) This should work from a server: run Firefox headlessly: http://www.alittlemadness.com/2008/03/05/running-selenium-headless/
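For approach 1, the crontab entry might look something like this (the script path and schedule are hypothetical):

```
# m h dom mon dow command
*/15 * * * * export DISPLAY=:0; /home/doug/bin/run_selenium_tests.py
```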
Saturday, December 20, 2014
Monday, December 15, 2014
Friday, December 12, 2014
Live data into Excel
I may have finally learned the native way to get live numbers into Excel:
http://www.windmill.co.uk/excel/excelweb.html
Can this also be used to publish numbers using Excel?
Thursday, December 11, 2014
Simple widgets
I ended up adapting the method at Dr. Nic's site for widgets.
It's fairly simple, and it can be made nicer by adding a call for the widget to load asynchronously. Actually, for my purpose, I needed to generate a whole class of widgets, which meant that the widget itself was generated dynamically. So I was able to simplify the code by getting rid of the second call to the server to get the data.
Tuesday, December 9, 2014
Great post on the regular expression module re in Python
http://www.dotnetperls.com/re
It takes you through a series of examples on using the re module in practice. It might be fun to write a miniature book around this topic.
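A few of the basic patterns such a walkthrough covers, sketched in Python (the sample text is mine):

```python
import re

text = "Posted 2014-12-09, updated 2014-12-23."

# re.search: find the first match, with capture groups.
match = re.search(r"(\d{4})-(\d{2})-(\d{2})", text)
year, month, day = match.groups()

# re.findall: collect every match at once.
dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)

# re.sub: replace every match.
masked = re.sub(r"\d", "#", text)
```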
Installing lxml
It turned out to be hugely difficult to install lxml on my Ubuntu 14.04 LTS virtual instance.
I think there were several problems. But the really tough one was pointed out here:
http://stackoverflow.com/questions/16149613/installing-lxml-with-pip-in-virtualenv-ubuntu-12-10-error-command-gcc-failed
Basically, I needed more memory, so I enlarged the RAM for my Digital Ocean droplet.
(I found nice instructions for doing this here. I don't think I would have had the courage to destroy my own droplet without someone saying that they had done it. Even so, it was difficult.)
And the error changed from gcc error status 4 to gcc error status 1. It reported failing to find -lz.
So I added the libz-dev package and ran the following:
sudo python2.7 setup.py clean build --with-cython install
from a git clone of the lxml repository. The good news is that it's now installed with Cython, so maybe it will be faster than it otherwise would have been.
Installing missing libraries
Useful post for Ubuntu installations:
http://iambusychangingtheworld.blogspot.com/2013/08/ubuntu-solve-cannot-find-l-errors.html
I quote, from "Ubuntu - Solve 'cannot find -lxxx' errors":
In Linux (Ubuntu), whenever you install a package and you see the following error:
...
/usr/bin/ld: cannot find -lxxx
...
It means that your system is missing the "xxx" lib. In my case, the error occurs when I install MySQL-python package:
/usr/bin/ld: cannot find -lssl
/usr/bin/ld: cannot find -lcrypto
So, the solution here is to install those missing libraries. For my computer, I just need to install the OpenSSL development lib, libssl-dev:
sudo apt-get install libssl-dev
And everything will be OK.
Monday, December 8, 2014
The Hitchhiker’s Guide to Python!--the next book
I've been reading a chapter each day from the Django book, and I'm now almost through the appendices. Just in the nick of time, I discovered where to go next.
The book is an open-source, ongoing project that is headed by my current python hero, Kenneth Reitz.
http://docs.python-guide.org/en/latest/
Nice looking cron replacement for django
Better deployment
After committing some changes, I've been deploying them using git push.
Then I'd check the pages. Some (the ones that required changes to Python code) would fail to update, and I'd need to ssh into the remote server and restart Gunicorn (kill -HUP process_number, where process_number can be found using grep).
Then, if I was smart, I would check the different instantiations of web pages to make sure that everything was in order.
Anyway, today I wrote a little script called deploy. I made it executable, put it in a folder called 'bin' in my home directory, and put it on the path by adding an export command to the bottom of my .bashrc.
Right now it just chains four commands using &&, so that each one only begins upon the successful completion of the last.
#!/bin/bash
cd /home/doug/workspace/nnumscom/nnums
git push live master && ssh django@[host_ip] 'kill -HUP [process_number]' && ./nopen
nopen is a script that opens a set of about 12 web pages on the remote website so that I can give everything a quick visual.
The nice thing is that now I can take this and improve on it over time. And, of course, I replace three processes (two of which I might forget) with one.
Labels:
bashrc,
deployment,
gunicorn,
linux scripting,
ssh commands
Saturday, December 6, 2014
Web-based http requests
Labels:
authentication,
cool webpage,
get,
post,
web requests
Friday, December 5, 2014
Django default settings
From appendix D of the djangobook:
Default Settings
A Django settings file doesn't have to define any settings if it doesn't need to. Each setting has a sensible default value. These defaults live in the file django/conf/global_settings.py. Here's the algorithm Django uses in compiling settings:
- Load settings from global_settings.py.
- Load settings from the specified settings file, overriding the global settings as necessary.
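The two-step algorithm can be sketched with plain dictionaries (toy values standing in for the real defaults):

```python
# Stand-ins for django/conf/global_settings.py and a project settings file.
GLOBAL_SETTINGS = {
    "DEBUG": False,
    "TIME_ZONE": "America/Chicago",
    "INSTALLED_APPS": [],
}
project_settings = {"DEBUG": True}

# Step 1: load the defaults. Step 2: override with the specified settings
# file, leaving everything it doesn't mention at its default value.
settings = {**GLOBAL_SETTINGS, **project_settings}
```

The project file only set DEBUG, so TIME_ZONE and INSTALLED_APPS keep their defaults.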
Labels:
configuration,
default settings,
django,
global settings,
settings
Thursday, December 4, 2014
Awesome commands for importing web data to Google Sheets
http://googlesystem.blogspot.com/2007/09/google-spreadsheets-lets-you-import.html
=importXML("URL","XPath expression")
=importXML("http://www.google.com/search?q=live", "//a[@class='l']/@href")
=importHtml(URL, element, index)
=importHTML("http://www.google.com/search?q=define:live", "list", 1)
=importData("URL")
=importData("http://www.nnums.com/api/get/cccccc.n")
=GoogleReader(URL)
Live updates of charts using Google Sheets
Once you generate live data in Google Sheets, you can use that data to generate live charts. These charts can themselves be embedded in email html or into a website.
Here's a nice overview:
http://www.christuttle.com/5-ways-to-enhance-websites-with-google-docs/#.VIC2n4_yOE0
Javascript library for aligning decimal points in a table
Apparently this is a bit of a trick in html, but here's a library if you have a table with columns:
https://github.com/ndp/align-column
Now if you don't have a table . . .
Saturday, November 29, 2014
Why not to use Python's 'eval' in a public service calculator
Here's a fun write-up on python eval security issues:
http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html
Basically, even if you try to restrict access to any and all functions and classes, you can use lambda functions and introspection to get a huge amount of access.
The most fun example in the write-up is
().__class__.__bases__[0].__subclasses__()
This gives a list of all classes loaded to that point in the program.
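You can run it directly to see the scale of the exposure:

```python
# Starting from a harmless empty tuple, walk up to object and back down
# to every class the interpreter has loaded so far.
subclasses = ().__class__.__bases__[0].__subclasses__()
names = [cls.__name__ for cls in subclasses]
```

Even a bare interpreter yields a long list of classes, and from any of them further introspection can reach file handles, importers, and more.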
Friday, November 28, 2014
cool tool in django: inspectdb
Say you have a legacy database and you want to put django on top of it. One line will construct python models for the database structure:
python mysite/manage.py inspectdb > mysite/myapp/models.py
You can then clean up the result as explained in chapter 18 of the Djangobook: http://www.djangobook.com/en/2.0/chapter18.html
Labels:
awesomeness,
databases,
django,
legacy code,
python
Thursday, November 27, 2014
An address for a specific number on a web page
A fun game to play when you are surfing the web and see something of interest is to right-click the object and then choose the option "Inspect Element." On my desktop, in Chrome or Firefox, developer tools open up and I can look at the element itself.
So that brings up the question: can the address of that element be exported and used? Can you, instead of linking to a page, link to an element on that page? If so, can you use that to follow a number or a text field as it changes over time?
The answer, of course, is 'sometimes.' How quickly the element address breaks depends on the page.
The question of how to reliably get a particular snippet of information from a page is important and I'll be coming back to it as I build nnums.
Tuesday, November 25, 2014
Excel to web and back again
The goal here is to learn to take a number in Excel and then automatically post it to a site on the web.
A second goal is to automatically read a number from the web.
This guy used VBA in Word to call a Windows library, so this solution is Windows-only. But maybe Excel is mostly Windows-only anyway:
http://www.wayne-robinson.com/journal/2009/7/28/wordexcel-http-post.html
A more general discussion is here:
http://stackoverflow.com/questions/158633/how-can-i-send-an-http-post-request-to-a-server-from-excel-using-vba
From that discussion, one person used QueryTables from VBA, which meant his solution worked for both Windows and Mac:
http://numbers.brighterplanet.com/2011/01/06/using-web-services-from-excel/
He also has a little bit to teach about going from the web to Google Docs and back.
But here's a more comprehensive resource if you're looking to learn about Google docs:
http://builtvisible.com/playing-around-with-importxml-in-google-spreadsheets/
You might need to learn about XPath for this.
It's also possible to link to a specific cell in Excel:
http://support.microsoft.com/KB/197922
Web queries in Excel:
http://www.vertex42.com/News/excel-web-query.html
Labels:
Excel,
get,
google docs,
http requests,
post,
xpath
Glyphicons and CDN
The Bootstrap template for a Carousel home page has little arrows called glyphicons that are broken by default, unless you use the versions of Bootstrap that are available through the CDN provider. Here's a description of the problem and solution:
https://stackoverflow.com/questions/18245575/bootstrap-3-unable-to-display-glyphicon-properly/18245741#18245
Configuring nginx to serve static files in django
The setup that I use right now relies on Nginx to handle static files and goes to Gunicorn when there's a need to interface with Python.
I update the files on the server by using 'git push live master,' where 'live' is the name of a repository on the server that has hooks set to update a directory that lives in Django. That directory includes the static files (css, js) that my pages rely on.
It is possible to configure Django to automatically copy all static files to a single location. Then all you need to do is make sure that that location is the one that your nginx configuration points at. See the docs here.
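That configuration looks roughly like this in settings.py (the path here is a placeholder); running python manage.py collectstatic then copies everything into that one directory:

```python
# settings.py
STATIC_URL = "/static/"
STATIC_ROOT = "/home/path/to/my/static/directory"
```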
In my case, I keep all my static files gathered under a directory named "static" so I just pointed Nginx at that directory. It's reasonably likely that I'll need to change that pointer at some point, so I'm writing a reminder of what I had to do right here.
First I found the relevant configuration file for nginx. On my Ubuntu VPS that file was under
/etc/nginx/sites-available/username
where 'username' was the user that I was set up under. There is also a default configuration file, which would be used in the absence of the more specific username configuration.
Second, in that file I altered the following address to point directly to my static directory
location /static {
alias /home/path/to/my/static/directory;
}
Third, I refreshed nginx:
sudo nginx -s reload
Fourth: There was no fourth step. That's it!
Monday, November 24, 2014
Using git to update a website
Basically, git is meant for version control. You have a local repository that has live, working files. But your remote repository is meant to be bare--without any live files. This asymmetry is built into the system to avoid having people working on files that then get changed while they are working on them.
But what that means is that if you want to use git to update live files on a server, you have to go through a little extra work. I say a little, because all you have to do is set up a bare repository that will receive your push. And then you make a hook so that when that repository is pushed to, it will automatically update the files in your live folder, which will reside elsewhere. As usual, Digital Ocean has a nice tutorial on the topic:
https://www.digitalocean.com/community/tutorials/how-to-set-up-automatic-deployment-with-git-with-a-vps
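The whole setup can be sketched in a few commands (the paths under /tmp are purely for illustration; the tutorial uses real server paths):

```shell
# 1) A bare repository to receive pushes, plus a folder for the live files.
mkdir -p /tmp/deploy-demo/live
git init --bare /tmp/deploy-demo/site.git

# 2) A post-receive hook that checks pushed commits out into the live folder.
cat > /tmp/deploy-demo/site.git/hooks/post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/tmp/deploy-demo/live git checkout -f master
EOF
chmod +x /tmp/deploy-demo/site.git/hooks/post-receive
```

From the local machine you would then add the bare repository as a remote (git remote add live ...) and deploy with git push live master.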
Reloading
I have Django running under Gunicorn under Nginx on a Digital Ocean installation that I had set up out of the box. After upgrading Django to 1.7, I wanted to do a quick test that I hadn't broken anything.
So I tweaked the settings and found that nothing had changed. I restarted Nginx, and nothing changed. Finally I went to Gunicorn.
As mentioned here, you "gracefully reload" using the command
$ kill -HUP masterpid
But that means you have to find out what masterpid is.
Here's one way to find out, given here:
$ pstree -ap|grep gunicorn
Look for the number at the top, toward the left.
Saturday, November 22, 2014
Setting up ssh with public key
https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets
This lets you set up a nicely protected server.
Friday, November 21, 2014
Nice detailed look at vectors in C++
Thursday, November 20, 2014
cron debugging: cat /var/log/syslog
So I wanted to test a cron job and make sure that it was running. I decided to make it something obvious, so I made my command
/usr/bin/firefox www.google.com
and told it to run every minute. And I got nothing. After restarting cron, making sure that the last line in crontab had a newline, and trying everything else I could think of, I finally looked at the syslog and learned that it was running. It just didn't output anywhere I could see it.
So here's another case where it's worth looking at syslog:
cat /var/log/syslog
The cron output isn't visible unless you direct it to a file that you choose and look at that file.
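To actually see the output, redirect it in the crontab entry itself (the log path here is arbitrary):

```
* * * * * /usr/bin/firefox www.google.com >> /tmp/cron-test.log 2>&1
```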
adding commands to manage.py (if you don't like the shell < script.py trick)
Say you want to do something periodically that will affect your Django environment (like checking email). You can schedule a task to do that using cron. But that task needs to be run within your Django environment.
One way to do that is to use
python manage.py shell < script.py
(or, if you have your permissions altered via chmod +x,
./manage.py shell < script.py
)
It works.
If you want to get fancier, you can make a custom command for manage.py:
https://docs.djangoproject.com/en/dev/howto/custom-management-commands/
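Following the pattern in those docs, a minimal command looks roughly like this (the app name myapp and the command body are placeholders): save it as myapp/management/commands/checkmail.py and run it with python manage.py checkmail.

```python
# myapp/management/commands/checkmail.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Check email periodically (placeholder logic)."

    def handle(self, *args, **options):
        # This runs inside the full Django environment, so models and
        # settings are available here -- which is what a cron task needs.
        self.stdout.write("Checked mail.")
```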
Wednesday, November 19, 2014
openshift
https://stackoverflow.com/questions/26871381/deploying-a-local-django-app-using-openshift
Simple site that makes money
http://www.randomamazonproduct.com/
Basically, it's hard not to keep pressing that refresh button. It's like a window into constant surprise. It might be worth recreating that project but narrowing the search to be within a particular area for within a particular price range or something.
Labels:
amazon,
first projects,
good ideas,
simple site
Unicode fun
Cool tool:
http://www.panix.com/~eli/unicode/convert.cgi?text=Unicode%20fun
Unicode fun
𝖀𝖓𝖎𝖈𝖔𝖉𝖊 𝖋𝖚𝖓
𝕌𝕟𝕚𝕔𝕠𝕕𝕖 𝕗𝕦𝕟
𝑼𝒏𝒊𝒄𝒐𝒅𝒆 𝒇𝒖𝒏
Tuesday, November 18, 2014
Ubuntu show desktop
Super-Ctrl-D
This blog is my memory for now.
Changing fieldnames with django 1.7
It took me a little searching to learn how to do this. Unlike most migrations, Django can't generate name changes automatically.
StackOverflow has a recipe for changing the name of a model and a couple of fields:
http://stackoverflow.com/questions/25091130/django-migration-strategy-for-renaming-a-model-and-relationship-fields
I simplified that recipe for the case when you only want to change a single field name:
https://gist.github.com/dhbradshaw/e2bdeb502b0d0d2acced
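The core of that simplified recipe is a single hand-written migration with a RenameField operation, roughly like this (the app, model, and field names are placeholders):

```python
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [("myapp", "0001_initial")]

    operations = [
        migrations.RenameField(
            model_name="mymodel",
            old_name="old_field",
            new_name="new_field",
        ),
    ]
```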
Labels:
change field name,
django 1.7,
migrate,
migrations
database migrations in django 1.7+
http://stackoverflow.com/questions/24311993/how-to-add-a-new-field-to-a-model-with-new-django-migrations
I found migrations a bit sticky in a couple of my applications because I didn't initialize migrations right off. Anyway, this link has a nice set of instructions that include the case where you didn't start off initialized.
From the link:
To answer your question, with the new migrations introduced in Django 1.7, in order to add a new field to a model you can simply add that field to your model and initialize migrations with ./manage.py makemigrations and then run ./manage.py migrate, and the new field will be added to your db. To avoid dealing with errors for your existing models, however, you can use the --fake flag:
- Initialize migrations for your existing models:
./manage.py makemigrations myapp
- Fake migrations for existing models:
./manage.py migrate --fake myapp
- Add the new field to myapp.models:
from django.db import models

class MyModel(models.Model):
    ...  # existing fields
    newfield = models.CharField(max_length=100)  # new field
- Run makemigrations again (this will add a new migration file in migrations folder that add the newfield to db):
./manage.py makemigrations myapp
- Run migrate again:
./manage.py migrate myapp
Monday, November 17, 2014
Make css work the same across all browsers
Link the following stylesheet to keep CSS the same across all browsers:
<head>
<link rel="stylesheet" href="normalize-css.googlecode.com/svn/trunk/normalize.css">
</head>
Thanks Udacity.
Labels:
browser compatibility,
css,
html,
normalize stylesheets
Android Intents: really cool
Great "get-started" info and motivation on intents:
http://android-developers.blogspot.com/2012/02/share-with-intents.html
Here's a nice example of taking and sending a screenshot:
https://gist.github.com/nikreiman/2310318
Nice tutorial for http access on Android
http://hmkcode.com/android-internet-connection-using-http-get-httpclient/
Make sure to go to the bottom to look at the asynchronous class.
Friday, November 14, 2014
There will come a time when I need to build a chrome addon. But not yet.
Thursday, November 13, 2014
csrf and request API
Notes to self on csrf protection in Django:
Django has built-in csrf protection if you use their decorators and form system. In fact, by default you can't process a POST request without csrf protection. Unfortunately, that protection acts as a wall against API POST requests not generated by the system.
The solution is simple: two views. The view that handles a GUI form needs the decorator.
A view that handles a curl or other programmatic request needs to be explicitly absolved of the decorator requirement. Protection must come from authenticating each request instead of relying on a previous login--but that's not a hassle to an automated system.
from django.views.decorators.csrf import csrf_exempt, csrf_protect
@csrf_exempt
@csrf_protect
Any guess which view needs which decorator?
The form needs csrf protection because it is relying on a previous login.
The API authenticates every time and needs to be csrf_exempt.
Wednesday, November 12, 2014
Tuesday, November 11, 2014
Chrome developer tool saving
Nice video on a powerful tool. Most importantly, it shows you how to save the changes that you make.
http://www.youtube.com/watch?v=N8SS-rUEZPg
Also, see
http://stackoverflow.com/questions/6843495/how-to-save-css-changes-of-styles-panel-of-chrome-developer-tools
Monday, November 10, 2014
Nice resource for working with the web from Android.
Working with the web is always chancy because you may not have connectivity. You don't want anything to hang, so you need your calls to be asynchronous. Here's a nice library that handles that for you:
http://loopj.com/android-async-http/
Labels:
android,
asynchronous http client,
get,
post,
web
The core of django authentication
You can find nice information here:
https://docs.djangoproject.com/en/dev/topics/auth/
and here:
https://docs.djangoproject.com/en/dev/topics/auth/default/
There is a lot to learn. But for me the core of it is this:
- authentication comes built in by default with simple user objects that include username, password, and email
- if you have a username, a password, and an email address, you can set up a new user using
>>> from django.contrib.auth.models import User
>>> user = User.objects.create_user('john', 'lennon@thebeatles.com', 'johnpassword')
- you can check the identity of a user in a view that receives a post request using
from django.contrib.auth import authenticate
username = request.POST['username']
password = request.POST['password']
user = authenticate(username=username, password=password)
Saturday, November 8, 2014
Repairing an archived program so that it is executable again
chmod -R +rx
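Archives (zip in particular) often drop the execute bit, and `chmod -R +rx` restores read and execute permission recursively. A rough Python equivalent of that command, as a sketch (the function name is mine):

```python
import os
import stat

def make_readable_executable(root):
    # Roughly `chmod -R +rx root`: add read+execute bits for user, group,
    # and other on the root itself and everything under it.
    rx_bits = (stat.S_IRUSR | stat.S_IXUSR |
               stat.S_IRGRP | stat.S_IXGRP |
               stat.S_IROTH | stat.S_IXOTH)
    os.chmod(root, os.stat(root).st_mode | rx_bits)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            os.chmod(path, os.stat(path).st_mode | rx_bits)
```

The root directory is fixed first so that os.walk can descend into it even if the archive left it unreadable.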
Friday, November 7, 2014
Getting python path associated with default python version
Getting python path associated with default python version:
python -c 'import sys, pprint; pprint.pprint(sys.path)'
Semicolons delimit the statements, pprint makes the output pretty, and sys.path shows what will actually be searched.
In my case I learned that my virtual environment version of python is calling libraries outside of that environment.
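A quick way to check whether a particular module is coming from inside the virtual environment is to compare where it loaded from against sys.prefix (which is the virtualenv root when one is active). A small sketch (the helper name is mine):

```python
import sys

def where_is(module_name):
    # Import the module and report which file actually got loaded, plus
    # whether that file lives under the current sys.prefix (the virtualenv
    # root when one is active).
    module = __import__(module_name)
    path = getattr(module, "__file__", None) or "(built-in)"
    inside = path != "(built-in)" and path.startswith(sys.prefix)
    return path, inside

# e.g. where_is("django") shows whether django is the virtualenv's copy
```

If `inside` comes back False for a library you installed with pip, the environment is leaking to the global site-packages.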
Labels:
pprint,
python,
python path,
python version,
sys,
sys.path,
virtualenv
Celebrating awesomeness: python request library and Uniprot
Uniprot makes it super easy to get protein sequences from a url.
The python request library by the awesome Kenneth Reitz makes it super easy to get the contents of a url.
Here's a screenshot from iPython:
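In place of the screenshot, here's a sketch of the same couple of lines. The FASTA URL pattern and the accession are my assumptions (the pattern matches Uniprot as it was at the time; the current site redirects to rest.uniprot.org), and `requests` is a third-party install:

```python
def uniprot_fasta_url(accession):
    # Uniprot serves plain-text FASTA at a URL of this shape (assumed pattern).
    return "http://www.uniprot.org/uniprot/%s.fasta" % accession

def fetch_sequence(accession):
    # `requests` is Kenneth Reitz's library: pip install requests
    import requests
    response = requests.get(uniprot_fasta_url(accession))
    response.raise_for_status()
    return response.text

# e.g. fetch_sequence("P69905") returns a FASTA record for that accession
```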
Thursday, November 6, 2014
Wednesday, November 5, 2014
Lessons learned: don't remove python3 on Ubuntu. Also, how to get a python3 virtual environment with Ubuntu 14.04
I was trying to create a virtual environment for python3.4 when I ran into the problem described here:
https://lists.debian.org/debian-python/2014/03/msg00045.html
Apparently, python 3.4 is deliberately broken by default in a way that pops up when you try to set up the virtual environment in the ordinary way. To quote the source:
The current situation in the Python 3.4 package is suboptimal because:

% pyvenv-3.4 /tmp/zz
Error: Command '['/tmp/zz/bin/python3.4', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1

Although `virtualenv -p python3.4 /tmp/zz` does work.

The bottom line in the quote above provided the key solution. But only after my big mistake.
My big mistake: removing python3
apt has me so spoiled that instead of turning to Google with the error message, I tried removing python3 with the plan to then reinstall it. It was while watching the messages pour out of package after package being removed that I realized my mistake. I had pulled out a critical part of the modern Ubuntu environment, and everything built on top of it was being removed. So I'm backing up all files on that installation, and re-installation is coming soon.

nginx, postgresql, django, virtualenv
Nice tutorial here on Digital Ocean:
https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-django-with-postgres-nginx-and-gunicorn
The instructions fell apart a bit once I got to the section configuring postgres. Basically, when I expected to see a query about a role name it skipped that and started asking me about a password.
So for the configuration of postgres on Ubuntu, I recommend skipping over to https://help.ubuntu.com/community/PostgreSQL .
After that I went back to the guide.
I found that since I was using python3 and because of a bug in 'pip' where it wants to install things globally, I had to change the
'pip install psycopg2' inside my virtual environment to
'pip3 install psycopg2' to get a local installation with python3.
Next, there is a deprecated command in the tutorial that no longer works.
Replace
gunicorn_django --bind yourdomainorip.com:8001
with
gunicorn myapp.wsgi:application --bind yourdomainorip:8001
Tuesday, November 4, 2014
Bitbucket
Bitbucket is an awesome place to start up a project.
You start a repository and it guides you through the steps to sync it with your home repository. And it gives you wiki and issue tracking built in. And you can share the project with others and keep it private at the same time. I love it.
Labels:
awesome,
bitbucket,
issue tracking,
version control,
wiki
Thursday, October 30, 2014
Got it and verified it: I can be seen from the outside. Loopbacks were the problem.
So it turned out that my router didn't support loopback: once I worked out port forwarding I was on the web, but I couldn't see my own site from inside the network.
I verified this by using a proxy server:
http://anonymouse.org/anonwww.html
which I pointed at the external IP address that Google reported for the query "what is my IP address".
I then tried the web address that I had forwarded to the computer using an A record, and this time it worked where it hadn't before. I'm live!
Labels:
can't see server,
External,
home server,
IP,
Loopback,
proxy,
router,
web server
Port Forwarding
http://portforward.com/english/routers/port_forwarding/Trendnet/TEW-432BRP/Apache.htm
The link above, along with specific information on logging into my router, gave me what I needed.
I've now verified
- That I can use port 80 to see the website on my internal network from another computer, and
- That port 80 is seen as open from the external network. (There's a nice tool for that at http://www.yougetsignal.com/tools/open-ports/ . I saw that it was closed, then changed my router settings to match the target server's IP address, and watched as port 80 was opened.)
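You can run the same open-port check from code with nothing but the standard library; this sketch simply attempts a TCP connection and reports whether it succeeded:

```python
import socket

def is_port_open(host, port, timeout=3.0):
    # Try a TCP connection; success means something is listening there
    # (and, when checked from outside your network, that forwarding works).
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

One caveat from the post above: checking your external IP from inside the network can still fail if the router lacks loopback support, even when forwarding is fine.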
IP settings in Ubuntu
Two nice articles from sudojuice on configuring IP settings in Ubuntu: one using the default Ubuntu UI and one using the terminal (hint: you have to disable the UI first!).
A useful find there is nm-tool, which is like a superior ifconfig.
Here they are:
http://www.sudo-juice.com/how-to-a-set-static-ip-in-ubuntu/
http://www.sudo-juice.com/how-to-set-a-static-ip-in-ubuntu-the-proper-way/
Labels:
IP settings,
Port Forwarding,
static IP,
sudojuice
Trendnet router reset: This makes me feel safer
http://portforward.com/networking/forgot-router-password.htm
Basically, if you ever were to lose the username and password associated with your router, you can put it back to factory settings using the reset pinhole. On my Trendnet that pinhole is on the front right below the power jack.
For most TrendNet routers, the reset leads to
username=admin
password=admin
See the portforward link up top for more details if this doesn't work.
Labels:
Factory settings,
password,
Port Forwarding,
reset,
router,
Trendnet,
username
Getting the internal IP of your router
ifconfig is good for finding your machine's internal IP number and other information, but to set up port forwarding you need your router's internal IP number.
If you see an IP number that is referred to as "default gateway" or just "gateway," that's your router's IP number.
If you're connected by ethernet, you can find that number using the following command:
$ nmcli dev list iface eth0 | grep IP4
See askUbuntu.
Okay, I just found a better way for out-of-the-box ubuntu on sudo juice:
$ nm-tool
This will give you your router IP, which it calls "Gateway."
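For scripting this on Linux, the default gateway can also be read straight out of /proc/net/route, with no nm-tool needed. The parser below is my own sketch of that file's format (one route per line; the default route has destination 00000000, and the gateway is a little-endian hex IPv4 address):

```python
import socket
import struct

def default_gateway(route_text):
    # Skip the header line, then look for the default route (dest 00000000)
    # and decode its gateway field from little-endian hex to dotted quad.
    for line in route_text.splitlines()[1:]:
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "00000000":
            return socket.inet_ntoa(struct.pack("<L", int(fields[2], 16)))
    return None

# Usage (Linux): default_gateway(open("/proc/net/route").read())
```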
Labels:
default gateway,
ifconfig,
internal,
IP address,
router,
router IP
Ubuntu, tasksel, and the caret
Nice find by Shantanu on the caret and tasksel:
If you have come across a tutorial or just someone on a forum who tells you to install something in Debian/Ubuntu that involves using apt-get, it is ok for you but when they tell you that you need to use a caret symbol (^) at the end, that’s where you become curious. What is even more weird is that when you search for the name of the package that the given command seems to install cannot be found using apt-cache search. e.g. You will see this used most often when someone tells you how to install LAMP server setup (Linux-Apache-MySQL-PHP) by using the command “sudo apt-get install lamp-server^”. If you miss the caret at the end or try to search for lamp-server, it just doesn’t work.
Well, the answer is that the caret symbol is a short form for performing a task that otherwise the program “tasksel” would have done with the given package name. tasksel is a program to ease the installation of commonly used things that go together for a particular use. e.g. In the above instance of LAMP, the four packages and their dependencies are always used together, so tasksel provides a sort of a meta-package or meta-task that can be run by the user with a single command and then tasksel will take it upon itself to install all of them and set them up correctly for your use. Now, apt-get provides a way to perform that same task by itself without you having to install tasksel first and all you have to do is to give that same package name to apt-get but just append a caret at the end to tell apt-get that it is a tasksel package/task identifier and not a regular package name in debian/ubuntu repositories.
The Bights Plan
Yard by yard, it's hard. Inch by inch it's a cinch.--Anon
by small and simple things are great things brought to pass--Alma 37:6
Yesterday I felt like I was throwing myself in every direction and getting little done. I have a monolithic task to work on, and it seems to grow and divide as I stare at it. So in the evening I went to sleep after praying for help in figuring out how to work effectively. This morning I couldn't sleep and finally went on a walk a little before 5:00 am. It was truly beautiful, with sweet cool air and a dark sky with stars. The result of all this is the beginning of a plan, which I will refer to as my bights plan.
Basically, bights are very small and concrete accomplishments. You want to get in a lot of bights in a day, perhaps 15. An idea for a bight might be to learn something. But learning something doesn't produce a concrete result. And it doesn't, in and of itself, make the world any better. So if your bight is to be to learn something, then you can give it a concrete output by documenting what you have learned.
A bight has to have an objective output, and you have to determine what that output will be before you begin working on the bight. The objective should be simple to describe and you should try for bights that don't take longer than 20 minutes to achieve.
Another bight might be to write tests for a function. The concrete result is that you have tests. You commit them and describe them.
The more people a bight can potentially help, the better the bight. The more you learn or grow in capability from taking the bight, the better the bight.
Often when I'm working I face a decision or I need to answer a question. Sometimes, these decisions and questions are too large for a single bight and need to be broken down. But usually what I've done in the past is I have researched the question and learned what I needed and then acted accordingly. This has had two problems. One is that in acting this way I only really helped myself. The other is that I myself can easily forget what I learn. So it was clear to me as I was walking this morning that I needed to start documenting what I learn. And that it would be even better if I were to publish the things that I learn so that other people can benefit.
So that's the plan.
Labels:
bights,
efficiency,
Pomodoro,
this blog,
time management
Monday, October 6, 2014
Cyclopeptide hunting: two resources
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3242011/
http://www.cybase.org.au/
http://proteomics.ucsd.edu/