commits-by "phil"


Fix github image rendering uniphil/commit--blog

Plus some other more mechanical improvements:

  1. Sanitize inputs with bleach before running markdown. We should be able to trust the tags generated by python-markdown, and this avoids accidentally stripping useful attributes. Also feels better to be running pre-cleaned user input into the renderer.

  2. Use bleach.linkify instead of the mdx_* plugin. Unfortunately it needs to run after markdown rendering, since it can mangle e.g. markdown image tags.

  3. Replace broken regex hack for github images with a little python-markdown extension. The commit-ish is needed for generating raw.githubusercontent urls, so that's now an additional required piece of context.


Revert to markdown-then-bleach

which i guess makes sense: bleach should be the very last thing to transform the content before presenting it for display, and possibly should not be applied except immediately before inserting into a trusted html context. Other sanitizers may be necessary for other contexts.
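The final ordering can be sketched with stand-in callables (a simplification, not the actual commit--blog code, which uses python-markdown and bleach):

```python
def render_post(source, to_html, sanitize, linkify):
    """Render markdown first; sanitize and linkify only as the very
    last steps before the HTML goes into a trusted context.
    The callables are stand-ins for markdown.markdown, bleach.clean,
    and bleach.linkify respectively."""
    return linkify(sanitize(to_html(source)))
```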

swap out the whole element positioning scheme uniphil/dom-destroyer

and destroy the last locked target, not the click target

i don't remember why this didn't use getBoundingClientRect in the first place, since it seems to work pretty well? is this old enough that that api was still too new to rely on? :p

also adds a bit of transition flair that will probably get annoying.

targeting of fixed elements now works properly when the document is scrolled.

Preview and save markdown rerenders uniphil/commit--blog

Since the config for markdown rendering is likely to change in the future, this adds detection of posts that have been rendered with previous configurations, with the option to preview a rerender under the new config.

If the new render result is identical, the rerender and version bump is auto-committed. If it differs, a preview is shown where it can either be committed or left as-is.

Contains hacks; :shrug:

Try out python-markdown for rendering uniphil/commit--blog

Opt in by adding ?rr=1 to the blog list, blog post, or rss feed endpoints.

If this seems to work, the next step will involve a database migration 😱. I dunno if i've ever done one with this project? 😅

My current thought is to add a markdown_version column to the posts table, so that posts rendered with old configurations are detectable. I don't want to keep multiple copies / versions around, but I think it's useful not to re-render and potentially change posts that were already blogged in the past.

When an author views an old post, we can offer to preview it with the current render config -- if they accept, it updates with no take-backs. But otherwise it just stays the same.
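A minimal sketch of that decision logic, with a hypothetical version constant and post attributes (not the real schema):

```python
CURRENT_MARKDOWN_VERSION = 2  # hypothetical current render-config version

def rerender_action(post, render):
    """Decide what to do with a post under the current render config.
    `post` needs .markdown, .html, and .markdown_version attributes."""
    if post.markdown_version == CURRENT_MARKDOWN_VERSION:
        return "up-to-date"
    new_html = render(post.markdown)
    if new_html == post.html:
        # identical output under the new config: bump automatically
        post.markdown_version = CURRENT_MARKDOWN_VERSION
        return "auto-committed"
    # output differs: show a preview and let the author decide
    return "preview"
```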

For a while I was thinking that maybe the markdown rendering could be pushed entirely out of the application and into a front-end/middleware cache layer, but I... don't think so any more. Some nice things can happen if the as-posted render is treated as first-class data.

Resolve library warnings uniphil/commit--blog

tbh i thought this was going to be harder :)

  • never used the flask-sqlalchemy event system, so set SQLALCHEMY_TRACK_MODIFICATIONS to False

  • i thought the CSRFProtection from flask_wtf was perhaps moving to a different package based on its warning message, but it really was just a rename of the same import. Woo!

FlaskWTFDeprecationWarning: "flask_wtf.CsrfProtect" has been renamed to "CSRFProtect" and will be removed in 1.0.

Bring back the github api for adding commits uniphil/commit--blog

The new git-based one is still there, but it's only activated by opting in via the manual commit form. It needs a bit more performance work to ensure that it won't crash and leave folks hanging when committing from large repositories.

Use dulwich without subprocessing uniphil/commit--blog

Still unsure of the cause of the hang with the previous configuration, but this seems to work ok.

Unfortunately it drops the timeout, but I think gunicorn will still try to kill it if a repo takes up more than the 30 seconds a worker has to respond. Hopefully it doesn't get stuck after that point.

Tried adding gevent since these workers def should be async, but it takes a while to compile from source, since this build is weird thanks to dulwich.

Thinking of using the Futures / coro threaded worker type instead. Need to read more about it first though.

Merge branch 'https' uniphil/commit--blog

secure! at last!

who knew that getting a reasonable letsencrypt wildcard subdomain would be such a hassle. i ended up migrating off the free heroku service that has kept this site online for well over half a decade, which has given me some time to think about how I can keep it online for the next half decade. designing for sustainable longevity instead of growth.

the site now runs on openbsd on a $3.50/mo lunanode vm. since lunanode runs on a credit system, I can load funds up-front. $100 gets me over two years of service at this rate. openbsd felt like a long-term low-churn option. i hope.

cloudflare is back in the mix, but only for a separate domain for the letsencrypt dns01 challenge -- nothing on the main domain goes through cloudflare at all. while having a separate domain for this doubles my domain costs, it feels like the best solution for now. the tool handling the actual cert renewal supports cloudflare out of the box.

Alternatives considered:

  • caddy + easydns. this was my first choice, but after looking into it, i found that the easydns api (which is still in beta) is not at a stage i'm comfortable deploying.

  • caddy + lunanode dns. i'm optimistic about lunanode, but i've never used their dns service before. their api seems relatively new, and i would need to write a custom caddy module to support it.

i took caddy out of the mix once I decided to deploy on openbsd, since relayd and httpd are really nice.

moving away from heroku also meant losing their managed postgres addon, so the database is now a lil sqlite3 file. that's what i use when i run it locally, and i'm not expecting any kind of sqlite-breaking growth.

i put it on a lunanode block volume, thinking that it would be nice for easy snapshotting backups and reattaching between VMs, but I don't think it was worth it. The lunanode attach/detach takes a bit of time (and a machine reboot for openbsd), and just moving the file around is easier and more efficient.

When I get around to taking it off the block storage, I'll set up a cron job to back up an encrypted copy to tarsnap or something.

the back-end runs gunicorn under a restricted user. deploys are automated through a git post-receive hook on the server, and currently get logged with a timestamped git ref. I briefly considered using github actions for the deploy, but felt that that idea didn't really fit with my target five-year sustainable project horizon.

i moved all the server customizations and setup scripts into a separate repo. it's not published yet, but i plan to publish it soon.

Revert "revert to http-only for now :(" uniphil/commit--blog

This reverts commit a77de8d.

Release v0.2 uniphil/dom-destroyer

whoa, wild, this all basically still worked? In both FF and Chrome???

Cleaned up a few bits, added a keyboard shortcut (ctrl+shift+L), etc.

I could keep cleaning, but 🤷 it all basically works. I think I fixed all the ancient bugs #2 and #3 too.

Upgrade dependencies uniphil/commit--blog

Also fix a small account deletion bug

tbh, ON DELETE CASCADE in the database on the FKs would be nicer, and, huh maybe actually postgres already did this and just sqlite wasn't?

shouldn't hurt to add this anyway.

Hotlink images from github repo refs uniphil/commit--blog

Hacky fix for #33, but seems to work? I actually think that this is a missing feature within github's GFM mode -- they do replace issue refs and username @s and some other bits contextually for the repo, but not images.

In the future, it would be nice to move off of relying on github's api to render the markdown, and then we can probably hook in more cleanly to fix up stuff like this.

Oh and I didn't bother fixing the <a> that github puts around the image, since I wanna replace this later anyway. Would happily merge a PR that handles that though :)

POC: i am an image

And, @riley's example from #33 should now work too! 🤞

hopefully this is the right error uniphil/timekeep

based on , it seems like there are some errors which tiny_http should just ignore?

Pass tests for #3778 uniphil/rust-clippy

This change guards the lint from checking newlines with a sort of complicated check to see if it's a raw string. Raw strings shouldn't be newline-checked, since r"\n" is literally a backslash followed by an n, not a newline. I think it's ok not to check for literal newlines at the end of raw strings, but maybe that's debatable.

I... don't think this code is that great. I wanted to write the check after check_tts, but that was too late -- raw string type info is lost (or I couldn't find it). Putting it inside check_tts feels heavy-duty and the check itself feels like a brittle reach possibly into places it shouldn't.

Maybe someone can fix this up :)
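For a quick illustration, the same distinction holds in Python raw strings:

```python
# a raw string keeps the backslash: two characters, not a newline
assert r"\n" == "\\n"
assert len(r"\n") == 2
assert r"\n" != "\n"  # the real newline is a single character
```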

bye ga hello timekeep uniphil/commit--blog

boo google tracking

Serve static files from root rust-lang-nursery/thanks

A bunch of special files served from the root path can do fancy stuff like:

  • provide icons (favicon.ico, apple-touch-icon.png, tile.png, ...)
  • control bots and stuff (robots.txt, humans.txt, ...)
  • describe the website structure (sitemap.xml)

This change makes adding these kinds of files simple: the whole public folder is served at the root, so they just go inside that folder and things Just Work. Most static files are served from sub-folders anyway, and now those paths are less of a mouthful.

It's basically nginx's try-files directive.

The obvious alternative would be to configure routes for each special static file that we want to serve from the root, which would just serve that file. This would have the advantage of not touching the filesystem for every request to see if there's a static file we can serve.

I'm proposing it this way instead because

  1. OSes are smart, so fs checks on every request are probably cheap enough?
  2. Caching should probably prevent rust from serving most requests anyway
  3. Configuring routes and handlers (or a macro or something) seems painful and tedious and higher-barrier to getting stuff done.
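The lookup described above amounts to something like this (a hedged Python sketch of the idea; the actual thanks codebase is Rust, and names here are invented):

```python
import os

def resolve(path, public_dir="public"):
    """Try a real file under public/ first; otherwise fall through to
    the app's routes. Roughly nginx's try_files."""
    candidate = os.path.join(public_dir, path.lstrip("/"))
    # keep the lookup inside public/ even for sneaky ../ paths
    inside = os.path.realpath(candidate).startswith(
        os.path.realpath(public_dir) + os.sep)
    if inside and os.path.isfile(candidate):
        return "file", candidate
    return "route", path
```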

Throw `new Error(msg)` for expect uniphil/results

...instead of just throwing the payload directly. This improves two things:

  1. It's nicer to use. myThing.expect('should have blah') instead of myThing.expect(new Error('should have blah')).

  2. It saves overhead. Calling new Error(message) always creates the error, including generating the stack trace and everything, even when nothing ends up thrown. Boo. I haven't actually measured it, but if that was hurting before, it shouldn't any more.

Implement Maybe.prototype.filter uniphil/results

Like Array.prototype.filter, given a test function, a passing Some(value) can pass through or be turned into a None() if test(value) is false-y.

I really wonder if there is a reason besides lack of motivation that other option types don't seem to implement this method. It has come up quite frequently on code I've worked with.
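results is a JavaScript library, but the semantics sketch cleanly in Python (hypothetical names, not the library's API):

```python
class Maybe:
    def __init__(self, value=None, has_value=False):
        self._value, self._has_value = value, has_value

    def filter(self, test):
        # like Array.prototype.filter: a passing Some survives,
        # anything else becomes an empty Maybe (i.e. None())
        if self._has_value and test(self._value):
            return self
        return Maybe()

    def is_some(self):
        return self._has_value

def Some(value):
    return Maybe(value, True)

def NoneM():
    return Maybe()
```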

Fix posting uniphil/

haha, last push was just missing lots of stuff.

fixed the data model while I was there to link author-topic-post instead of whatever weird topic-author-post tangle it was before

fix controlled input state / cursor position uniphil/xxs

apparently removing/writing an input's attributes (id I guess?) resets the cursor position in FF, so now the virtual dom actually diffs the attributes

events are still removed/reattached everywhere for now.

Hack around a bug in chrome/v8's JS optimizer WorldBank-Transport/edudash

We were getting Uncaught TypeError: Cannot read property 'slice' of undefined from the _zoomEnd handler used internally by leaflet when the map was zoomed by any means, about 50% of the time.

The impact was rendering errors and semi-broken interactivity of the map. Zoom still worked but click-drag to pan did not.

Through debugging, it was determined that the cause is some issue in the javascript compiler's optimizer. Placing a debugger; statement would prevent the bug from occurring, but doing so also disables the optimizer for that function. The optimizer-bug hypothesis was validated by adding other code known to disable optimization of javascript functions: an empty try {} catch(e) {} made the bug go away, as did a with({}); statement anywhere in the function.

Finally, through experimentation it was discovered that simply moving the erroring expression into an inline anonymous function also made the bug go away. This is the fix contained in this commit, which monkey-patches itself into leaflet if it detects itself running on chrome.

It would be nice to reduce the original bug to a minimal setup that exhibits the issue in order to prepare a bug report for the chrome team, but this has yet to be completed.

Add Backbone uniphil/js-tools

There are amd-compatible versions of underscore and backbone available, which gulp will find by using the -amd suffix, as in bower.json:

  "dependencies": {
    "underscore-amd": null,
    "backbone-amd": null

That is the main notable thing about adding backbone. It is given a name in [app/scripts/main.js] just like jquery so that it can easily be required.

The structure of these changes is inspired by the same tuts+ tutorial as the last.

Use Bower to track dependencies uniphil/js-tools

.bowerrc configures the directory for bower to install dependencies, in this case app/scripts/vendor/. Bower will create that directory upon calling $ bower install from the project root.

Since we are now using bower to install dependencies including requirejs, we must update the path to require.js, to its new home in the vendor folder. This path is adjusted in the <script> tag in index.html.

The path to jquery is also updated in app.js.

Finally, the paths config in bower.json is updated so that jquery points to new install location from bower.

Inspiration for this setup was taken from a tuts+ template for setting up requirejs, backbone, and bower.

Add requirejs uniphil/js-tools

This commit roughly follows the getting started guide from RequireJS.

A single script tag in index.html loads requirejs and points it to the application entry-point, app.js.

app.js handles configuration for requirejs, and then simply requires the main module.

The paths config is used to specify the actual path to the jquery module, allowing other modules to simply require("jquery") by name.

main.js, finally, is the entry-point for application code. Later it might initialize a backbone View or something; for now it just requires jquery and a trivial test module, and calls a function from the module.

Store GeoJSON schools data in the DOM devgateway/map-kibera-schools

Originally, to keep the home page lightweight, no schools data was in the HTML. It was loaded onto the map via an AJAX request for a file, /schools.geojson.

This is slightly problematic because it means that 1) Users not using javascript have no path to access the schools pages. 2) Search engines may have a hard time finding the schools pages to index them.

Recently, a "Browse" dropdown was added to see a list of all schools. Writing that list into the HTML when generating the site makes sense, and solves both of the above problems.

Additionally, since most of the weight of the browse list in the HTML is the HTML itself, and the required additional info is minimal, this commit adds all of the data previously pulled via schools.geojson into the home page HTML.

After GZIP, the extra weight on the home page is small, since the markup and content are very repetitive. The resulting experience is faster, since no AJAX is necessary after loading the page any more, just parsing a bit of the DOM.

The actual dom parse step saves the data separately from the DOM, in GeoJSON format, so working with it on the map is simple.

Match lowercase subdomain against lowercase username uniphil/commit--blog

Fixes #30

There are two important parts in a URL for web servers to consider, which have slightly different rules:

  1. Subdomain: Everything after :// until the next /.
  2. Path: Everything after and including that / until a # if there is one. /mail/u/0/ for my inbox. The #inbox is not sent by the web browser to the server.

Subdomains have stricter rules on allowed characters and they are case-insensitive. Chromium will actually convert all alpha characters to lowercase before sending the request.

On commit--blog, each blogger gets a subdomain for their blog, and that subdomain comes from their username. Until this past Tuesday, everyone who I've convinced to sign up and try it out has had all-lowercase usernames. @pR0Ps finally bucked the trend and uncovered this issue.

The fix is simple: query the lowercased subdomain against lowercased usernames. Woo.
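In plain Python terms (a stand-in for the actual SQL query, with invented names):

```python
def blogger_for_subdomain(subdomain, usernames):
    """Subdomains are case-insensitive, so match the lowercased
    subdomain against lowercased usernames."""
    wanted = subdomain.lower()
    return next((u for u in usernames if u.lower() == wanted), None)
```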

Wikipedia on URLs. There is a lot there. Always interesting to find out new things that seem very familiar.

Fix formatting of preview message bodies uniphil/commit--blog

Linewrapping is possible in pre tags with the css rule white-space: pre-wrap.


fix #27

Expand to preview post bodies in dashboard uniphil/commit--blog

Fix #8

There is probably a memory leak in the event binding code in main.js. It's not a big deal here because there are always few posts, but it probably should be refactored.

Fix 500 on users with no name uniphil/commit--blog

Some users have usernames, but not names on GitHub (like @rileyjshaw used to).

On commit--blog, everywhere there should be a name, I do user.name or user.username. If user.name is None, it'll fall back on their username, which must exist.

That pattern now pops up in a bunch of places so a better fix would be to add a method to the user object to just get the appropriate one. So this isn't a very good fix but I'm lazy.

This commit should fix #24, however. In that case a string was being concatenated to the username with the + operator, something like:

greeting = 'hello ' + user.name or user.username

This doesn't work at all when user.name is None, because in python, the + operator has a higher precedence than the or operator. So it's effectively doing:

greeting = ('hello ' + user.name) or user.username

... which is clearly not the desired effect, and moreover string concatenation with None is illegal so it breaks.

The quick fix here is to force precedence with parens, like:

greeting = 'hello ' + (user.name or user.username)

though maybe string concatenation is a poor pattern and I should have done string formatting. Whatever.
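The precedence trap is easy to demonstrate with plain values standing in for the user object:

```python
name = None            # user.name missing, as in the bug
username = "rileyjshaw"

# + binds tighter than or, so this is ('hello ' + name) or username,
# which blows up when name is None:
try:
    'hello ' + name or username
    raised = False
except TypeError:
    raised = True

# forcing precedence with parens gives the intended fallback:
greeting = 'hello ' + (name or username)
```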

Add feeds (fix #1) uniphil/commit--blog

Thanks for the extra motivation @asadch

Add python2 support uniphil/altpass

The part I'm less confident in is that it swaps int.from_bytes out for ord, as from_bytes is a new python3 thing.

Could this have non-obvious implications? With int.from_bytes you get to pick the byteorder, but that shouldn't have any effect since it's used on exactly one byte from urandom.
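A quick check of the equivalence (python 3 shown; ord() is the part that also works on python 2's byte strings):

```python
import os

b = os.urandom(1)
# byteorder can't matter for a single byte
assert int.from_bytes(b, "big") == int.from_bytes(b, "little")
# ord() accepts a length-1 bytes in python 3 too, and agrees
assert ord(b) == int.from_bytes(b, "big")
```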

Wrap methods instead of screwing with the class hierarchy uniphil/Windermere

wrap_file_field is a class decorator for handling file uploads, deletions, etc with a flask-admin sqla.ModelView. It wraps the ModelView methods for creation, updating, and deleting. ModelViews don't (and can't really) know anything about files, so this is an end-user convenience that makes everything work with a few config options passed into the decorator.

It's simple. For a model like this:

class Photo(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    description = db.Column(db.Text)
    photo = db.Column(db.String(200))  # this will be a filename

... you might make a Flask-Admin ModelView like this:

@wrap_file_field('photo', 'photos-folder', endpoint='uploaded_file', photo=True)
class PhotoView(sqla.ModelView):
    """Manage your nice photos whose files will be magically taken care of yay"""

And you're done (provided you've set up the right config variables for the uploads directory and stuff). Sweet.

However, the old version's implementation screwed with the class hierarchy a bit, and that caused problems. Particularly, it broke super():

@wrap_file_field('photo', 'photos-folder', endpoint='uploaded_file', photo=True)
class PhotoView(sqla.ModelView):
    """Manage your nice photos whose files will be magically taken care of yay"""
    def get_list(self, *args, **kwargs):
        result = super(PhotoView, self).get_list(*args, **kwargs)  # infinite recursion?!?!
        # screw around with result...
        return result

The new version no longer subclasses the user's class, but instead mutates it. Users can use super as much as and however they like and it's all terrific.
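The mutate-in-place approach looks roughly like this (a hypothetical simplification, not the actual wrap_file_field code):

```python
import functools

def wrap_method(cls, name, around):
    """Mutate cls in place (don't subclass it), wrapping method `name`.
    Because the class identity is unchanged, user-defined super()
    calls keep working."""
    original = getattr(cls, name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        return around(original, self, *args, **kwargs)

    setattr(cls, name, wrapper)
    return cls
```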

Don't ask bloggers for any scopes uniphil/commit--blog

... a change which should have meant simply removing the scope=user,public_repo query from the authorize URL's parameters. Unfortunately, it's not quite that simple.

1. GitHub rejects all authenticated requests for users which have not granted any scopes.

Yes, even for public endpoints. It's weird. They give you an oauth bearer_token for the user after authenticating and everything, but even though the endpoints are public, 401 sorry.

Instead, you have to make requests with your GitHub app's client_id and client_secret in the URL parameters.

2. Making rauth handle app-authenticated requests is awkward.

I'd still like to use it because it takes care of the base url and stuff. But getting those app tokens injected into the URL, while not injecting the (useless) bearer token, is tricky.

So as a compromise this commit awkwardly makes some subclasses for both the requests Session class and the rauth OAuth2Session class to make everything play nicely. Within the app there are two session concepts for github now: a user-specific authenticated one (worthless except for logging in and out) and a more general app session for everything else. Both are attached to the gh blueprint.

Whatever man. It works.
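The param-injection part can be done with a small stdlib helper (a hypothetical sketch; note that GitHub has since deprecated passing credentials in query parameters):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_app_auth(url, client_id, client_secret):
    """Add the app's client_id/client_secret as URL query parameters,
    preserving any parameters already present."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update(client_id=client_id, client_secret=client_secret)
    return urlunparse(parts._replace(query=urlencode(query)))
```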


Because I'm a bad git user I also snuck in a bit more refactoring and a mixin to set an app-specific user-agent into this commit.

List signed-up bloggers uniphil/commit--blog

Prints out a list of all the bloggers registered on the platform ordered by the number of posts they've made.

The query itself is an ORM hack that works. I don't know if it's good as a SQL query or not, but for checking up on the two users currently registered on commit--blog it's just fine. Here is the SQL it generates:

SELECT blogger.id AS blogger_id, blogger.gh_id AS blogger_gh_id, blogger.username AS blogger_username, blogger.name AS blogger_name, blogger.avatar_url AS blogger_avatar_url, blogger.access_token AS blogger_access_token
FROM blogger ORDER BY (SELECT count(blogger.id = commit_post.blogger_id) AS count_1
FROM commit_post)

That ORDER BY (SELECT count... makes me nervous; it seems like it's doing a lot of work. Then again the python code felt like it would be a lot of work too:

bloggers = Blogger.query \

I'd like to spend some more time learning and writing SQL queries. Not that I ever work with enough data for my fumbling ORM hacks to matter, but I'd like to have a more clear picture of what is going on. Maybe some day I'll have enough data to play with for it to matter.

My impression of SQLAlchemy is that, given knowledge of how to write good queries, it makes expressing them pythonic and doesn't get in your way. Not that I have experience to know, but for now that's why I'm still using it, given how easy it makes getting data in and out.

Show blogger.username when blogger.name is not available uniphil/commit--blog

So I promised my friend @rileyjshaw a link to this blogging platform yesterday, but I kept putting it off so that I could try to get it into a usable state first. Finally earlier this evening I sent him a link. I took a break, came back, and checked the logs to see if he'd signed in yet or not. Sure enough he had!

Excited, I went over to Riley's page, and was greeted by my website as it would have greeted Riley:

$ commits-by "None"

No posts by None yet :(

Riley has not entered a name for his github account. Riley, if you felt saddened by that page you have only yourself to blame :)

It's fixed now.

Forget trying to make subfolders work in place of subdomains uniphil/commit--blog

I can't figure out how to control which URL will be chosen by Flask/Werkzeug in case more than one matches. Flask tries to pick the match with the fewest variables, but here I'd like to use different URLs with the same number of variables (<subdomain> becomes part of the rule, like /subdomains/<variable>/ + rest_of_rule or something).

This would be great because while subdomains in production are awesome, they are a pain to deal with locally. My current solution is to put lots of garbage in my /etc/hosts file to send all the subdomains I might want to test with to 127.0.0.1, and then set export SERVER_NAME=commit--blog.local:5000 in my environment. This kind of sucks. Lots to set up just to test the site.

Later I might try making a special @route decorator that will determine at run-time whether to build a subdomain rule or a development subfolder rule.
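That run-time decision might look something like this (a hypothetical sketch of the idea, not real code):

```python
def build_rule(rule, use_subdomains):
    """Return (subdomain, rule) for registering a view: a real
    subdomain rule in production, a /subdomains/<subdomain>/ path
    prefix for local development."""
    if use_subdomains:
        return "<subdomain>", rule
    return None, "/subdomains/<subdomain>" + rule
```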

Set up for Heroku uniphil/commit--blog

Notably, psycopg2 and/or postgres are picky about types when filtering in queries. In particular, Blogger.gh_id is stored as a string in the database, while GitHub's responses provide it as an int, so it has to be cast to a string.

We could take GitHub's word for it and put ints in the database, but depending on data types from external sources feels sketchy.
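The mismatch in miniature:

```python
gh_id_from_api = 123456     # GitHub's JSON gives an int
stored_gh_id = "123456"     # the column stores a string

# postgres/psycopg2 won't coerce the int against a text column,
# so cast before filtering:
assert str(gh_id_from_api) == stored_gh_id
assert gh_id_from_api != stored_gh_id
```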

Keep it simple to start uniphil/commit--blog

Just use github's api for everything. It's awesome anyway.

Part of the original purpose of commit --blog was to play with libgit2 and specifically pygit2 because it is so damn cool.

For now that'll be pushed back on the for-later stack so that I can make the site work.