Making a special string with __str__

“Reuse!” The battle cry of Object-Oriented aficionados

Occasionally you really want to use a library so that you don’t have to write your own version of whatever the library provides. But, there’s just one little thing that it doesn’t do. Here’s a story of when this happened to me and how I managed to get around it in a creative manner!

At work we are using Elasticsearch as a datastore for some logging. For “reasons” Elasticsearch doesn’t encourage the use of TTL (time to live) on its records; instead they encourage you to just name your indexes after today’s date and then delete each index when it is past your TTL.

And this is ok. But… if you want to use a library like logzio-python-handler this can be a problem. That library has some awesome capabilities, but one limitation: it expects you to be writing to a static, unchanging index.

If you have a long running server process this can be a problem. You don’t want your logs from August 4th being written into the July 14th index because that was when you started the server. You want your logs written to their daily index! But you have to supply a string to the library for it to know where to write to. What?!?!?

It would be really impractical to create a new logging handler object every time I needed to write a log message!

I need a magic string

So when I was faced with this problem recently I thought about it for a few minutes. It occurred to me that if I could pass the library a function that got called to generate the correct string, that would solve my problem.

See, a string is an object. And when a string is printed out, Python calls the __str__() method on the object to get the text to display. So all I needed to do was create my own object with its own special __str__ method! Here’s what I did:
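Roughly this; the snippet below is a minimal sketch, and the exact URL layout and constructor arguments are illustrative:

    from datetime import date

    class MagicURL(object):
        """Pretends to be a URL string, but builds itself fresh on every use."""

        def __init__(self, base_url, index_prefix):
            self.base_url = base_url
            self.index_prefix = index_prefix

        def __str__(self):
            # Work out today's index name and plug it into the URL.
            index = "{0}-{1}".format(self.index_prefix,
                                     date.today().strftime("%Y.%m.%d"))
            return "{0}/{1}".format(self.base_url, index)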

When the logzio logging handler runs, it calls that MagicURL’s __str__() method, which figures out today’s date, plugs it into the URL, and returns that to the framework. At that point the messages will be written to the correct index.

The advantage of this is that as your app stays up for days and weeks (it does, doesn’t it?) the logging messages will automatically roll forward into the new index every day.

The other huge advantage here is that you don’t have to change the library in any way. You are simply passing in an object with special behavior and letting the library be a black box.

Here’s what it looks like to call this in action:
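A sketch of the usage; the host and index prefix here are made up:

    url = MagicURL("https://elastic.example.com:9200", "app-logs")

    # Anything that formats the object as a string triggers __str__:
    print("Writing to {0}".format(url))
    # e.g. Writing to https://elastic.example.com:9200/app-logs-2017.08.04

Pass url to the logging handler in place of the plain URL string, and nothing else has to change.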

The end result is that we got to use this library (instead of trying to re-implement it ourselves) and we got the behavior we needed out of it. A win-win!

Wrapping up

The next time you see something that “just takes a string”, remember that you can define the string with a little bit of magic. The __str__ method lets you inject runtime logic into places it wouldn’t normally go!

Python Debugging

Python is an awesome language and environment to work in. And thanks to some great tools, Python debugging can actually be fun!

Let’s look at some of the things that separate Python debugging from debugging in other languages:

Interactive debugging

Compared to other languages like Java, Python values interactive tools like the REPL. The REPL (Read-Eval-Print Loop) allows Python developers to “experiment” on code without having to go through the usual write/save/compile/run cycle.

This feature carries over into the built-in Python debugger, pdb. With pdb you can do all of the normal debug operations like stepping into code, but you can also run arbitrary Python code!
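For example, a single call drops you into an interactive prompt right where you put it:

    import pdb

    def total(prices, tax_rate):
        pdb.set_trace()  # execution pauses here with a (Pdb) prompt
        return sum(prices) * (1 + tax_rate)

    # At the prompt you can step with "n", inspect values with "p prices",
    # or run arbitrary Python such as: sum(prices) * 2
    print(total([1.99, 4.50], 0.07))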

Command line first

With everything moving to “the cloud” these days, the command line is becoming more important than ever. Since most Python debugging tools are built on top of pdb, it is now super convenient to use the debugger on a remote machine.

Simply ssh into your remote machines and boom, you can start using pdb just like you would on your local machine.

Hopefully this isn’t something you will need to do often, but as we all know sometimes things happen in production that just don’t happen on your local dev machine. It is great to have this option!

Choices!

While pdb is pretty cool as it is, there are other choices and options to make it even more awesome! Here are some command line tools that can make your Python debugging experience more enjoyable:

  • pdb++ — Just `pip install pdbpp` and you will get a new coat of paint on pdb with tab completion, colors, and more!
  • PuDB — A cool text-based GUI for debugging
  • better_exceptions — A pretty printer for your exceptions

And of course there are more visually oriented tools, for those who prefer working in Integrated Development Environments (IDEs). Here are some great ones that I have used:

  • PyCharm — My preferred Python IDE. Lots of great things in this tool, and I highly recommend it to everyone.
  • Wing IDE — Another popular IDE I have used off and on over the years.
  • Eclipse — Is there anything Eclipse can’t do? With the installation of a few plugins it becomes a decent Python IDE.

Each of these offers the ability to set breakpoints, examine the stack, and all kinds of other debugging goodness, all in a nice, easy-to-read format. If you are just starting out with Python I highly recommend checking them out to help guide you as you learn the language.

More on Python Debugging

I’ve collected my best tips on Python Debugging into an e-book called “Adventures In Python Debugging”. Check it out over at PythonDebugging.com. There’s a free 5 day email course if you would like to get a sample of the book and learn more!

The curse of knowledge: Finding os.getenv()

Recently I was working with a co-worker on an unusual nginx problem. While working on the nginx issue we happened to look at some of my Python code. My co-worker normally does not do a lot of Python development; she tends to do more on the node.js side. But this look at the Python code led to a rather interesting conversation.

The code we were looking at had some initialization stuff that prompted my co-worker to ask, “Why are you using os.environ.get() to read in environment variables? Why aren’t you using os.getenv()?” I stared blankly for a second and said “huh?”

I was a bit puzzled by this question because this developer is really good with node and also with Ruby. Perhaps she was thinking of a command from a different language and not Python, I thought to myself. Together we looked it up real quick and, much to my surprise, I discovered there actually is a function in the standard library called os.getenv(), and it does exactly what you would think: it gets an environment variable if it exists, and returns None (or a specified default) if it doesn’t.

Using os.getenv() is a few characters shorter than using os.environ.get() and in the code we were looking at it just looked better. Since the code didn’t need to modify the environment variables, it just made sense to use it. But it got me thinking: I’ve been working in Python for a few years now, how did I not know about this?
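Side by side, the two look like this:

    import os

    # Both return None when DB_HOST is unset, or the default if one is given.
    host = os.environ.get("DB_HOST", "localhost")
    host = os.getenv("DB_HOST", "localhost")  # same behavior, fewer characters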

You don’t know what you don’t know

For me this was a real educational moment. It is very easy to think that we know it all, especially with things that you use day-in and day-out. But, you should never think that you know everything about a language even if you are an expert. There are people around you who, even though they might be experts in different languages or technology, still have something interesting to offer to you and your code.

Have a conversation with someone who is either junior or senior to your skill level. Very quickly one of you will discover something new. For example, the junior person could discover a new approach to solving a problem. And a senior person can get a new perspective.

The second situation is one that I really identify with. As you become more “senior” in most things you begin to suffer from “the curse of knowledge”. This means your knowledge advances to a point where you can no longer recognize when something is beyond a beginner. The danger is that you develop a new set of assumptions about everything and you stop questioning things the way you used to.

If you are not aware of this, it can lead to some nasty things. (Think arrogance, blind spots in the code/system, etc.) It also can lead to conversations that unintentionally intimidate others from participating in your development process in an effective manner. No matter how you slice it, this is a very bad thing.

Having a second set of eyes, especially those that come from a different background, can really help surface issues in your code. That is always useful. In this case I was very fortunate and was able to get some insight into code that was working but perhaps a little bit inefficient. Now I have code that looks a lot better when it gets to the code review.

Learn from this

So, today go and talk with someone who has different areas of knowledge or experience levels than you. Something good will probably come of it soon.


Debugging Flask, requests, curl, and form data

Here’s a recent situation I found myself in where some HTTP form data was not appearing like we expected.

Debugging Flask

The basic setup is this: A Django process is replaying some HTTP traffic to another system that is written in Flask. The issue was that some requests that were coming in had form data that wasn’t making it to the other system.

To help troubleshoot this, I created a simple flask app that would echo out the headers, body, and form fields it saw on incoming requests. Let’s call this the receiving program. The idea was that we could point our relay app to that address and dump out everything so we could see what the issue was.
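The throwaway app was something along these lines (reconstructed; the route and port match the examples later in this post):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/hello", methods=["POST"])
    def echo():
        print(dict(request.headers))
        print(request.form)   # parsed form fields (urlencoded or multipart)
        print(request.files)  # uploaded multipart parts land here
        return "ok"

    if __name__ == "__main__":
        app.run(port=5000)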

The first thing that I noticed was that our form POSTs did not have any of the form fields I was expecting. There was nothing in the request.form or request.data fields.

At this point I was concerned that there was something I was missing in how flask was either reading the request or in how it was sending it. To narrow it down I chose to use curl to send requests to my receiving program.

This revealed what turned out to be the first problem: The receiving program was looking for form data, but the replay program wasn’t sending it. When I did a curl command like this:

curl http://receiver/hello --data '{"my":"form","data":"blah"}'

I would see the receiver print out the data. So that pointed to my replay code as being a source of the problem.

Sending form data with requests

The replay code uses the most excellent Requests library to do its HTTP communication. Requests is very easy to use; most of the time just doing a requests.post(url, data=<your data to send>) is all you need. But for form data there is another option.

It turns out you can also send multipart form data by swapping out the data parameter with the “files” parameter. This is where my debugging went off the rails for an hour.
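Here is roughly what the two options look like with Requests; the (None, value) tuples are how you tell Requests a multipart field is plain data rather than a file upload:

    import requests

    # data= sends Content-Type: application/x-www-form-urlencoded
    requests.post("http://receiver/hello",
                  data={"my": "form", "data": "blah"})

    # files= sends Content-Type: multipart/form-data; the (None, value)
    # tuples mark these as plain fields rather than file uploads
    requests.post("http://receiver/hello",
                  files={"my": (None, "form"), "data": (None, "blah")})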

The wrong path

My original code was using the data parameter but I wasn’t seeing anything pop out in the receiver. Putting 2 and 2 together I managed to get 153 and figured I must be using the wrong parameter so I replaced data with files and retested.

To my surprise, the receiving program was still not seeing any form data! In the flask code looking at request.form revealed an empty string!

After using pdbpp to step through the code and inspect the request object closer I made a surprising discovery: The data I sent was in the request.files field!

Thoroughly confused, I killed the receiving program and replaced it with the nc command. Netcat (nc) is a handy utility that can send or receive data on a socket. I had reached a point where I didn’t understand why or how Flask was getting the data and manipulating my HTTP request.

Invoking the command:

nc -l 5000

Makes nc listen on port 5000. As it listens, it dumps out what it receives. Since HTTP is a plain-text protocol, I could see exactly what my replay code was sending. In this case it was sending:
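It looked roughly like this (reconstructed here, with the boundary shortened):

    POST /hello HTTP/1.1
    Host: receiver:5000
    Content-Type: multipart/form-data; boundary=7913c8af

    --7913c8af
    Content-Disposition: form-data; name="my"

    form
    --7913c8af
    Content-Disposition: form-data; name="data"

    blah
    --7913c8af--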

Which looks pretty different compared to what curl was sending:
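Again, roughly:

    POST /hello HTTP/1.1
    Host: receiver:5000
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 27

    {"my":"form","data":"blah"}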

The big difference is that one has the markers for multipart and the other doesn’t. What gives?

The multipart is just that: “multiple parts”, as when you are sending things mixed together in the same request, like HTML and images. The plain form (the second example) doesn’t have that because the header declares that the entire request is going to be one type. For my replay code, this is what we were doing in the first place, and it was correct.

Where’s the beef?

So at this point we have walked in a giant circle. It turns out I was sending the data correctly, but it wasn’t being seen. What gives?

Going back and investigating the original replay code I focused on logic where we handle form encoded requests. It turned out we had a nasty bug in how we detected and handled form data.

To identify requests with form data we were looking at the Content-Type header for “form-data”. The code looked like this:

if request.content_type == "form-data":

This is a bit of a problem because the accepted Content-Types for form data have a lot more text in them. (Specifically “application/x-www-form-urlencoded” and “multipart/form-data”) This resulted in us never looking at the request.form field to get the data! For the morbidly curious, the next few lines took data from request.body which is blank if the Content-Type is set to some kind of form data.

Further down the line when it was time to replay the data, we took what happened to be a properly formatted Content-Type and then passed along an empty string in the data field.

As soon as I changed the logic to:

if "form" in request.content_type:

The code started working as expected. It detected the form data properly, and then put it into the correct spot before transmitting to the receiving program.

The lessons learned

First and foremost, make sure you are sending the data you think you are. 🙂 Other lessons:

  • Even though form data can look like the body of an HTTP request, Flask will treat it differently if the Content-Type is set correctly
  • Using curl to send “correct” requests is a great way to confirm your code is sending the data you think it is.
  • Debugging flask sometimes means using other tools. Using netcat/nc to dump out the data is an even better way to make sure you are really sending what you think you are sending.

pip and private repositories: vendoring python

At work I am working on a project to migrate a series of Python apps into the cloud. Docker is a perfect fit for some of the apps, but one problem we ran into is getting our apps to build when they have a dependency on a private repository. Using a technique called vendoring we are able to work around this problem and ensure that our dependencies are well known. Let’s look at vendoring python code.

Vendoring Python: The basic problem

When docker builds an image we have it execute pip install -r requirements.txt to install all of our Python dependencies. Inside of our requirements.txt file we have the normal dependencies like this:

oauthlib==0.7.1
requests==2.4.3
requests-oauthlib==0.4.2

But we also have some dependencies that live in private repositories and those have entries that look like this:

-e git+https://github.com/company-name/private-python-utils.git

This line tells pip to go to github and pull down that project. The catch is that for a private repo, pip needs an SSH key that has access. If you run pip from the command line the operating system supplies that SSH key and pip is able to install the project.

When docker runs pip, it does not have access to those ssh keys. As a result, the pip install fails because it can’t see the repository.

Python vendoring: put your dependency in a safe place!


It might be possible to add a key to docker to allow it access, but then this becomes a management pain: everything that tries to run docker build is going to have to be set up with that key. (Think about CI services, new developers, etc.)

Instead, a better solution is to “vendor” the code. This means taking a specific snapshot of the project and putting it into your project, as in checking it into git. I first saw this technique being used by people in the Go community. They were doing it as a way to guarantee they were working with a “known” piece of code. (“Known” meaning that they had done a security audit on it, etc.)

Let’s walk through the high level steps and then discuss the reasons and details.

Package up the dependency

In Python, there is a special file called setup.py that lives in the root directory of a project. For libraries this is a useful file to have: it describes the project and its dependencies. (Side note: if you are going to put a project on pypi.python.org, having this file is a requirement.)

For details about setup.py I will refer you to this excellent article. This will get you up and running with a bare-bones file which is good enough for this exercise.
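A bare-bones file can be as small as this (the name and version are placeholders):

    from setuptools import setup, find_packages

    setup(
        name="private-python-utils",
        version="1.0.0",
        packages=find_packages(),
    )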

With that file in place, the next step is to package up your code using the command:

python setup.py sdist

That will create a directory called dist which holds a copy of your project in an installable form. I work almost exclusively on Linux systems, and by default there it seems to produce .tar.gz files.

Adding the dependency

The next step is to take that distributable file and put it into a directory in the base of your project. As a convention, most people will call this directory “vendor”. This identifies it as things that are external-yet-essential to the project.

Once the distributable file is there, the next step is to commit it to version control. By doing this you guarantee that your code is now working against a known version of the dependency. This is a big deal in environments where immutability and repeatable builds are valuable.

Updating the requirements.txt

The final step is to update the requirements.txt file so that pip will be able to find and install the library. This is surprisingly easy to do. Simply change the line (see above) to:

vendor/private-python-utils.tar.gz

And now when pip runs, it will look in the vendor directory for that file and then install it from there. At this point you are vendoring python! The code should be ready to go.

Pro-tip

One thing I like to do when creating a setup.py file for a library is to include something to get the current git tag and commit information. This can be included into the name of the distributable file which helps identify which version of the library you are working with.

Sometimes a gist is worth a thousand words, so here’s an example of how to do this. (If you are not using git as your source control there is probably a similar way to do this.)
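That gist isn’t reproduced here, but the idea boils down to a sketch like this, using git describe (adapt it to your tagging scheme):

    import subprocess
    from setuptools import setup, find_packages

    def git_version():
        """Return something like 'v1.2-3-gabc1234' for the current checkout."""
        try:
            out = subprocess.check_output(["git", "describe", "--tags", "--always"])
            return out.strip().decode("utf-8")
        except Exception:
            return "unknown"

    setup(
        name="private-python-utils",
        version=git_version(),
        packages=find_packages(),
    )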

Wrapping up vendoring python

By this point you should have everything in place for an “external” system like docker or a CI server to be able to build your project. As long as it can run pip it should be able to find the dependency and install it.

If you want to see another example of vendoring packages from github repositories, check out this link here for a great overview of using some of pip’s lesser known features.

With this in place you should be able to feel more secure about the code you are running, because now the version is really locked down.

Cleaning up legacy python code

Python is growing in popularity, which is a great thing! And with that growth we are now seeing more and more legacy python projects. Occasionally you are going to inherit one of these “legacy” projects. Here are some tips on how to get it under control.

What is a legacy project?

Over the years I have heard lots of definitions of what a legacy project is. Here’s a few of the gems I’ve heard used to describe these projects:

  • An “older” project that has been around forever
  • A code base without any kind of tests
  • The project that no one wants to work on
  • “Everyone who worked on this left the company years ago…”

And sometimes the code isn’t old. A lot of times a project will be done real quick and put into production before everyone realizes that there’s a better way (e.g. using a different framework). By the time that happens, the “legacy” project is doing an adequate job and management is afraid to touch it. This is a pretty legitimate concern to the higher-ups in a company: “If it ain’t broke, why try to fix it?”

So, what are the best things to do with a legacy project?

Make sure it is in version control

Before you do anything, make sure the project is in some kind of version control system. Too often I have seen “proof of concept” programs that were thrown together and pushed into production without any thought given to making sure that the code was somewhere safe.

This is doubly true if the code is in any way important to business operations: you do not want to be the last person who was seen with the only copy of the code. If for some reason this project is not in git/subversion/etc., put it in there NOW. If your company doesn’t have a version control system, beg your manager to invest in one ASAP.

Delete commented out code

Once a project is under version control, one of my favorite tasks is to delete any commented out code.

The older the code base the more commented out code there tends to be. I’m not talking about 1 or 2 lines here and there. I encounter dozens to hundreds of lines of code commented out on a fairly regular basis.

Commented out code is a waste of cognitive time. New developers (like you) will see all of that code and burn time trying to understand why it is there, yet hidden in a block of comments.

In my experience it is there because something in the project changed and someone is hedging their bets that the old code will be needed again. So it gets commented out and then haunts the code base forever and ever.

Once the code is under source control, the deletion will be recorded in the repository. If for some reason it becomes necessary to revive that code, any developer on the team can go back through the commit history and pull out just what is needed. (Spoiler alert: most of the time you will never need that commented out code.)

Running tests/adding tests

Now that your legacy python project is under version control, we can start to do some more interesting things. One of the first things I like to do is try to run any unit tests that might be there.

A WORD OF CAUTION: Sometimes a project will have unit tests that aren’t quite “unit” tests. In other words, beware of any tests that might reach out and talk to a live system. I was bitten by this recently when I ran a set of unit tests that were doing destructive things to a production cache system. Thankfully we were able to recover it quickly, but I still get grief about it every few weeks.

If there are no tests, this is a great time to add some. Most bosses are cool with tests because they usually don’t impact the existing code. Of course, check first before you add anything.

As you are running the tests, consider using coverage.py to see how well the code base is being covered by the tests. If there are any “critical” spots where the tests aren’t hitting, those should be the first spots you should write tests for.
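A minimal run looks something like this (assuming the tests are discoverable by unittest):

    pip install coverage
    coverage run -m unittest discover   # run the suite under coverage
    coverage report -m                  # per-file numbers, plus missed line numbers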

Pylint/vulture

Something to consider doing at this point is running some type of linter over the code to see how “healthy” it is.

A linter is a bit of software that examines code and looks for anything suspicious or “wrong”. Pylint is a great Python-specific linter that can look at your code and offer suggestions. (Vulture, which hunts for dead code, is another one worth a run.)

By default pylint is pretty verbose and will flag all kinds of things that might not really be that important (such as lines longer than 79 chars). There are ways to control the output, and you should check here for more information about that.
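For example, you can silence the noisier checks right from the command line (a .pylintrc file works too):

    pip install pylint
    pylint --disable=line-too-long,missing-docstring your_package/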

So what should you look for in the pylint output? Personally I like to look for:

  • Unused variables
  • Anything that is noted as a potential bug

If this project is going to be worked on, then these are things you should probably consider fixing as you add features. Removing unused variables is a no-brainer: it reduces visual noise and helps developers reason about the code.

Things that are flagged as potential bugs: These need to be handled carefully. Sometimes the code is working in spite of the bug.

Flake8? Not so fast!

Since I’ve mentioned pylint and its awesomeness, I should also mention style checkers like pep8 and my favorite flake8, along with the companion tool autopep8. Together these can be used to reformat python code to make it more PEP8 compliant.

While this is normally a good thing, I don’t recommend doing this on a legacy project right away. My reason for saying this is because if the code is working (especially if it is in production) any changes made to it should be minimal so that you preserve the code as it is.

While most of the time the tools will not modify the code in any destructive way, I have seen strings that were too long get mangled a little bit. This can lead to confusion about what the line was actually doing.

My personal approach would be to have unit tests in place first, and then apply flake8. Also, if you are developing new features to add to the code base, then you should be using flake8 on them.

More idiomatic python

By this point you probably have a really good grip on your legacy project. Depending on what the future holds for the project (new features, or just maintenance) you might want to consider revisiting some of the suggestions from pylint.

One thing I have seen pop up in legacy code projects is code that is “unpythonic”. This includes things like badly formed for loops, checking if “something == None”, and other spots where things could just be more idiomatic.
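A couple of the usual suspects, before and after:

    items = ["a", "b", "c"]
    something = None

    # Unpythonic
    if something == None:
        print("missing!")
    for i in range(len(items)):
        print(items[i])

    # Pythonic
    if something is None:
        print("missing!")
    for item in items:
        print(item)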

This is a topic I really enjoy learning more about as I feel I am still learning the true way. If you would like to learn more, I highly recommend reading Effective Python as it is full of great examples of pythonic coding techniques.

Wrapping up

With the exploding popularity of python we are now starting to see more and more legacy python projects. Thanks to some basic tools and the beauty of the language itself, this doesn’t have to be a scary proposition like it is with other languages. Java, I am looking directly at you.


python __debug__

The other day I stumbled upon Python __debug__ and I thought I would share some interesting things I learned about it.

What is python __debug__?

It is a constant that Python uses to determine whether calls to assert should result in code being generated. If you have the -O optimization flag set, then assert calls will not be “triggered” in your code, even if the condition they are testing is false.
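A quick way to see both behaviors is to save this as debug_demo.py and run it with and without -O:

    # debug_demo.py
    if __debug__:
        print("assertions are live")

    assert 2 + 2 == 5, "this only raises when assertions are live"

    # $ python debug_demo.py     -> prints, then raises AssertionError
    # $ python -O debug_demo.py  -> silent; the print and the assert are compiled away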

Interesting fact: according to the documentation it is one of two constants that will raise a SyntaxError exception if you attempt to assign something to it. (The other constant that does this is None.)

For example, in python you can do this:

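In a Python 2 session:

    >>> True = False
    >>> True
    False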

Totally legal. Not smart, but legal.

And that is totally legal in Python 2.x. (Python 3 wisely does not allow this!) I would not recommend doing this, as your co-workers will hunt you down to express their “displeasure”. But if you try that with __debug__ or None, you’ll get an error:

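Both fail at compile time (the exact wording of the error varies a little between versions):

    >>> __debug__ = False
    SyntaxError: can not assign to __debug__
    >>> None = False
    SyntaxError: cannot assign to None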

The only two “true” constants in python

Normally I’m not a fan of these types of language tricks, but this looked pretty cool to me and I thought I would share. I do find it interesting that in Python 3 they have True and False locked down to be true constants. Honestly, I thought there would be more language keywords that would be protected like that.


Improving your python: using pylint and flake8 in emacs

In a previous post I mentioned an issue I had with some python code that failed in a way I hadn’t expected.

Long story short, I was in the wrong, which does happen from time to time. Aside from the actual bug, there was another failure: I thought my tools were checking my work. It turned out this was not the case!

I’ve been a big fan of flake8 for some time. Its ability to find PEP8 issues is a big help, and most of the time I find that it tends to root out problem code before it gets too bad. When I was dealing with that bug I was using PyCharm. I’ve since started using emacs, but would that have made a difference?

It turns out that if I had been using pylint I might have caught this issue. I’m placing the emphasis on might there because there’s a bit of overlap between flake8 and pylint that I wasn’t aware of. Let’s explore them a little bit more.

Becoming a better programmer


Navy SEALs jump from the ramp of a C-17 Globemaster III over Fort Pickett Maneuver Training Center, Va. (Air Force photo by Staff Sgt. Brian Ferguson)

In the Navy SEALs they have a saying: every day you have to earn your trident. The trident is the symbol sailors earn when they complete the training that makes them part of the elite SEALs. It is possible to lose one’s trident, however. To guard against behavior that might cause this, the SEALs remind themselves that every day they have to “earn (the right to wear) their trident”.

As I spend more time in the programming world I have come to realize there is great wisdom in this approach. A good friend of mine once told me that “experience and skills are expiring assets.” In other words, if you don’t use them, you lose them. Just because your job title has the word “Senior” in it, you don’t automatically get a pass. You need to earn that title every day.

So as a programmer, how can you earn your place? How can you improve on who you were yesterday? What does it take to make sure you are becoming a better programmer? Here’s what I’ve been doing.

On moving from Java into Python

Before coming to Python, I did a lot of work in Java. Java is a pretty good language and environment, but it is different from Python. Beyond the language syntax there are a ton of little differences to be aware of. Sometimes when we move from Java into Python it shows in some of the things that we do.

Here are some things I’ve learned over the years (or things that I’ve stubbed my toes on recently).