| Age | Commit message | Author |
|---|---|---|
|  | Like what I did in edf6fceedd9b4169ceb63172c60733ef84d78951 for 500
errors, extract these error responses into functions as well.
Doesn't give us any gains in terms of reusability like it did before, as
we're only responding with each of these errors once, but it does clean
up the code in the `main()` function a bit. | 
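A rough sketch of what one of these extracted helpers might look like with the `fastcgi` crate; the function names here are placeholders, not necessarily the ones used in the commit:
    use std::io::Write;

    // Hypothetical helper: write a status line and empty body to the
    // FastCGI response.
    fn respond_with_status(req: &mut fastcgi::Request, status: &str) {
        // Nothing useful to do if the client has gone away, so ignore
        // write errors.
        let _ = write!(req.stdout(), "Status: {}\r\n\r\n", status);
    }

    fn forbidden(req: &mut fastcgi::Request) {
        respond_with_status(req, "403 Forbidden");
    }

    fn internal_server_error(req: &mut fastcgi::Request) {
        respond_with_status(req, "500 Internal Server Error");
    }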
|  | Clean up the `main()` function by extracting all these similar lines to
a function. | 
|  | Otherwise we get a borrow error:
    error[E0373]: closure may outlive the current function, but it borrows `pool`, which is owned by the current function
       --> src/main.rs:67:18
        |
    67  |     fastcgi::run(|mut req| {
        |                  ^^^^^^^^^ may outlive borrowed value `pool`
    ...
    123 |                 let mut cx = match pool.get_conn() {
        |                                    ---- `pool` is borrowed here
    help: to force the closure to take ownership of `pool` (and any other referenced variables), use the `move` keyword
        |
    67  |     fastcgi::run(move |mut req| {
        |                  ^^^^^^^^^^^^^^ | 
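Together with the per-request `pool.get_conn()` described in the commit below, the fix amounts to something like this sketch (the `mysql` crate's `Pool`, a placeholder database URL, and a stubbed-out handler body):
    use std::io::Write;
    use mysql::Pool;

    fn main() -> Result<(), mysql::Error> {
        // Placeholder URL; the real one comes from the environment.
        let pool = Pool::new("mysql://user:password@localhost/dome_key")?;

        // `move` transfers ownership of `pool` into the closure so it can
        // outlive main's stack frame, exactly as the compiler hint says.
        fastcgi::run(move |mut req| {
            // Ask the pool for a fresh connection on every request.
            let _conn = match pool.get_conn() {
                Ok(conn) => conn,
                // The real handler logs this and responds with a 500.
                Err(_) => return,
            };
            let _ = write!(req.stdout(), "Status: 200 OK\r\n\r\n");
        });

        Ok(())
    }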
|  | This way we can ask the pool for a connection on each request instead of
trying to reuse a single connection. | 
|  | This currently errors on a borrow problem with the `cx` in the closure.
Here we get the purchaser name and email from the POST params and insert
them as a record in the database.
If all goes well, we respond with a 200. Otherwise we log errors and
respond with 500. | 
|  | Move the call to `params::parse()` from `request::verified()` into
`main()`. This enables us to access values from POST params inside the
`main()` function. We'll need this to store purchaser name and email
address. | 
|  | Log incoming requests to the program's log file.
Remove the 500 error when failing to read stdin to a string. I think it
should be safe to ignore that error. Now that I think about it, we
should be logging it though. | 
|  | Previously we were responding with a 200 if all else checked out. This
seems too permissive. Only the authorised webhook requester should
receive a 200. All other requesters should be denied access. Swap the
last two responses to reflect this. | 
|  | `run` target runs a `lighttpd` server and watches the executable for
updates with `entr`. | 
|  | * If no `REQUEST_METHOD` is found, send a 500 error
* If the `REQUEST_METHOD` is not "POST", send a 405
* If POST params could not be read from stdin, send 500
* If an error occurred during request verification, send 500
* If the request didn't pass verification, send 403
* Otherwise send 200 | 
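Sketched out, the dispatch in the list above looks something like this (`respond` and `verified` are placeholder helpers standing in for the real code):
    use std::io::{Read, Write};

    fn handle(mut req: fastcgi::Request) {
        // Placeholder helper: write a status line and empty body.
        fn respond(req: &mut fastcgi::Request, status: &str) {
            let _ = write!(req.stdout(), "Status: {}\r\n\r\n", status);
        }

        // Placeholder for the request-verification step.
        fn verified(_post_params: &str) -> Result<bool, ()> {
            Ok(false)
        }

        let method = match req.param("REQUEST_METHOD") {
            Some(method) => method,
            None => return respond(&mut req, "500 Internal Server Error"),
        };
        if method != "POST" {
            return respond(&mut req, "405 Method Not Allowed");
        }

        let mut params = String::new();
        if req.stdin().read_to_string(&mut params).is_err() {
            return respond(&mut req, "500 Internal Server Error");
        }

        match verified(&params) {
            Ok(true) => respond(&mut req, "200 OK"),
            Ok(false) => respond(&mut req, "403 Forbidden"),
            Err(_) => respond(&mut req, "500 Internal Server Error"),
        }
    }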
|  | Make it easier on users by not requiring them to pass a signature into
the method. This means they don't have to extract the `p_signature`
param and base64 decode it themselves.
Essentially, we want to move the code from `request` that removes the
`p_signature` key and base64 decodes it into the
`paddle::verify_signature()` function.
We need to make the string-like type params in `verify_signature()`
conform additionally to `PartialEq<str>` and `PartialOrd`. Doing so
allows us to find the key "p_signature".
To remove the `p_signature` param from the iterator, we partition it
into two iterators: one for the `p_signature` entry, and another for the
rest. We then extract the value of `p_signature` and base64 decode it
for verification.
Add a new error type in case no `p_signature` entry is found in the
iterator. | 
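The partitioning step described here might look roughly like the following, simplified to owned `String` pairs instead of the generic string-like bounds, with a plain `String` error standing in for the new error type, and using the older `base64::decode` free function:
    /// Split the params into the `p_signature` value and everything else,
    /// then base64-decode the signature.
    fn split_signature<I>(params: I) -> Result<(Vec<u8>, Vec<(String, String)>), String>
    where
        I: IntoIterator<Item = (String, String)>,
    {
        let (signature_entries, rest): (Vec<_>, Vec<_>) = params
            .into_iter()
            .partition(|(key, _)| key == "p_signature");

        // New error case: no `p_signature` entry in the iterator.
        let signature_b64 = signature_entries
            .into_iter()
            .next()
            .map(|(_, value)| value)
            .ok_or_else(|| "no p_signature param found".to_string())?;

        let signature = base64::decode(&signature_b64)
            .map_err(|e| format!("signature is not valid base64: {}", e))?;

        Ok((signature, rest))
    }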
|  | I think I was doing it in the wrong direction. Previously, I had added
the signature from the POST param to the verifier, and verified against
the serialized params.
Seems like I was instead supposed to add the serialized params to the
verifier, and verify against the input signature.
It works correctly now against a request from Paddle. | 
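With the openssl crate, that corrected order is roughly: feed the serialized params into the verifier, then verify against the decoded signature. A sketch, assuming the public key PEM is already in hand:
    use openssl::hash::MessageDigest;
    use openssl::pkey::PKey;
    use openssl::sign::Verifier;

    /// Verify `serialized_params` (the sorted, serialized POST params)
    /// against `signature` (the base64-decoded `p_signature`).
    fn check_signature(
        public_key_pem: &[u8],
        serialized_params: &[u8],
        signature: &[u8],
    ) -> Result<bool, openssl::error::ErrorStack> {
        let pkey = PKey::public_key_from_pem(public_key_pem)?;
        let mut verifier = Verifier::new(MessageDigest::sha1(), &pkey)?;
        // The data goes into the verifier...
        verifier.update(serialized_params)?;
        // ...and the signature is what gets verified against.
        verifier.verify(signature)
    }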
|  | In the POST param, the signature is a base64 string, but when we verify
it, it needs to be decoded to bytes. | 
|  | In order to verify the signature, it needs to be encoded as bytes. | 
|  | Before it only used `%H:%M:%S`. We need a date. Use
> %+	2001-07-08T00:34:60.026490+09:30	ISO 8601 / RFC 3339 date & time format.
(https://docs.rs/chrono/0.4.0/chrono/format/strftime/index.html#specifiers) | 
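For example, with `chrono` (the function name is illustrative):
    use chrono::Local;

    fn log_timestamp() -> String {
        // `%+` is the ISO 8601 / RFC 3339 date & time format.
        Local::now().format("%+").to_string()
    }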
|  | Stop writing this information to the response text and instead put it in
the program log file. Don't want to send back unnecessary information
when testing the Paddle webhook. | 
|  | Return a `Result` from the function to pass errors through. | 
|  | Use `AsRef<str>` instead of `&str` to offer a more flexible interface.
We need this because `url::form_urlencoded::parse()` gives us an
iterator of `(Cow<_, str>, Cow<_, str>)`, and we want to pass that into
`verify_signature()`.
Also change `key.len()` and `value.len()` to `.chars().count()` because
I was having a hard time getting the `len()` method from a trait (`str`
doesn't implement `ExactSizeIterator`), and I learned this:
> This length is in bytes, not chars or graphemes. In other words, it
> may not be what a human considers the length of the string.
(https://doc.rust-lang.org/std/primitive.str.html#method.len)
Also:
https://stackoverflow.com/questions/46290655/get-the-string-length-in-characters-in-rust/46290728#46290728
I assume the PHP serializer uses character count instead of byte length. | 
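Under that assumption, serializing a single string in PHP `serialize()` style looks roughly like this sketch (the helper name is made up):
    /// Serialize one string as PHP's `serialize()` would, using the
    /// character count (not the byte length) for the `s:<len>:"...";` form.
    fn php_serialize_str<S: AsRef<str>>(value: S) -> String {
        let value = value.as_ref();
        format!("s:{}:\"{}\";", value.chars().count(), value)
    }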
|  | The new `request::verified()` takes POST params as a string and does all
the work needed to call `paddle::verify_signature()`.
This involves extracting the `p_signature` POST parameter to get the
signature, and getting the public key PEM.
Change `params::parse()` to return a
`BTreeMap<Cow<'a, str>, Cow<'a, str>>` instead of `String` keys &
values. This is because `paddle::verify_signature()` needs a `(&str,
&str)` iterator. Actually, it still doesn't solve the problem because
the types don't match. We need to modify the input type of
`verify_signature()`, but at least this change gives us references.
Make `params` private to the crate because we no longer need to use it
in `main()`. | 
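A sketch of what `params::parse()` might look like after this change:
    use std::borrow::Cow;
    use std::collections::BTreeMap;

    pub(crate) fn parse(input: &str) -> BTreeMap<Cow<'_, str>, Cow<'_, str>> {
        // form_urlencoded::parse yields (Cow<str>, Cow<str>) pairs;
        // collecting into a BTreeMap keeps them sorted by key, which the
        // signature verification relies on.
        url::form_urlencoded::parse(input.as_bytes()).collect()
    }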
|  | We want a dictionary to be able to remove the Paddle `p_signature` entry. | 
|  | Otherwise it doesn't indicate the name of the environment variable in
the result output:
    Error: Error(EnvVar(NotPresent), State { next_error: None, backtrace: InternalBacktrace { backtrace: None } }) | 
|  | I had forgotten to commit the transaction, so the record I was trying to
insert wouldn't get persisted in the database. | 
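For reference, a committed transaction with the current `mysql` crate API (which has changed since the version this project used, and with only the name and email columns shown) looks roughly like:
    use mysql::prelude::*;
    use mysql::{Pool, TxOpts};

    fn insert_purchaser(pool: &Pool, name: &str, email: &str) -> mysql::Result<()> {
        let mut conn = pool.get_conn()?;
        let mut tx = conn.start_transaction(TxOpts::default())?;
        tx.exec_drop(
            "INSERT INTO purchasers (name, email) VALUES (?, ?)",
            (name, email),
        )?;
        // Without this, the INSERT is rolled back when the transaction is
        // dropped at the end of the scope.
        tx.commit()?;
        Ok(())
    }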
|  | Manually check that our purchaser creation and database persistence
works.
Make `purchaser` entities public in order to call them. | 
|  | Realised that when we want a new purchaser, we always want to generate a
secret. This way we can call `new()` without having to call
`generate_secret()` at the call site. | 
|  | Get a file path from the `LOG_FILE` environment variable and use it for
log output.
Call `database::get_database_connection()` and log the error if it
fails.
Worried about exiting early from the FastCGI program, as DreamHost says
this causes problems
(https://help.dreamhost.com/hc/en-us/articles/217298967). But I don't
see how the program can continue without a database connection.
Return a `Result` from `main()` because it's easier to use the `?`
operator for errors that happen before logging is initialised. | 
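A sketch of that setup, assuming the `simplelog` crate for the file logger (the commit doesn't name the logging crate actually used):
    use std::env;
    use std::error::Error;
    use std::fs::File;

    use simplelog::{Config, LevelFilter, WriteLogger};

    fn main() -> Result<(), Box<dyn Error>> {
        // Errors before logging is initialised are easiest to surface
        // with `?` and the Result return type.
        let log_path = env::var("LOG_FILE")?;
        WriteLogger::init(
            LevelFilter::Info,
            Config::default(),
            File::create(log_path)?,
        )?;

        // From here on, failures are logged rather than returned, since
        // DreamHost warns against FastCGI programs exiting early, e.g.:
        //
        //     if let Err(e) = database::get_database_connection() {
        //         log::error!("could not get a database connection: {}", e);
        //     }

        Ok(())
    }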
|  | Function to establish a database connection using a connection pool.
Update `Purchaser::insert()` to take a `PooledConn` instead of a simple
`Conn`. | 
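A sketch of such a function with the `mysql` crate; the env var name (`DATABASE_URL`) and the error type are assumptions:
    use std::env;
    use std::error::Error;

    use mysql::{Pool, PooledConn};

    /// Build a connection pool from the database URL in the environment
    /// and hand out a pooled connection.
    pub fn get_database_connection() -> Result<PooledConn, Box<dyn Error>> {
        let url = env::var("DATABASE_URL")?;
        let pool = Pool::new(url.as_str())?;
        Ok(pool.get_conn()?)
    }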
|  | Add a new variable for the regular database URL, and move the existing
one, which includes the `tcp(hostname)` format, to `GO_DATABASE_URL`. We
need to keep the existing one for use with the 'migrate' command, but I
want a regular database URL to be able to use inside the main web
program. | 
|  | Split up the code to get things a bit more organised. I want a function
to create a connection to the MySQL database and I don't want to lump it
in with the rest. | 
|  | I don't see myself using this since I have the `generate_secret()`
function now. | 
|  | The `random()` function I was using will sample a value from "the full
representable range"
(https://docs.rs/rand/0.5.5/rand/#the-two-step-process-to-get-a-random-value).
We should really be using longer numbers, so set the sample range to
integers greater than or equal to 1 billion. | 
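With the rand 0.5 API cited above, that is roughly (the function name is illustrative):
    use rand::{thread_rng, Rng};

    /// Random integer at or above one billion. rand 0.5's `gen_range`
    /// takes (low, high) arguments; newer versions take a range instead.
    fn random_component() -> u64 {
        thread_rng().gen_range(1_000_000_000u64, u64::max_value())
    }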
|  | This new method generates a secret, which is a SHA1 digest of the
purchaser's name, email, and a random integer.
In order to use the `hexdigest()` method in the 'sha1' crate, I needed
to add the feature `std`
(https://docs.rs/sha1/0.6.0/sha1/struct.Sha1.html#method.hexdigest).
Needed to change the `secret` field to a `String` because otherwise the
generated digest string doesn't have a long enough lifetime to assign to
it.
Update `with_secret()` to use the new `String` type.
Update `insert()` to correctly handle the `Option` in `secret`. | 
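A sketch of what `generate_secret()` might boil down to, using sha1 0.6's `hexdigest()` and taking the name and email as arguments rather than reading fields off the struct:
    use rand::{thread_rng, Rng};
    use sha1::Sha1;

    /// SHA1 hex digest of the purchaser's name, email, and a random integer.
    fn generate_secret(name: &str, email: &str) -> String {
        let n: u64 = thread_rng().gen_range(1_000_000_000u64, u64::max_value());
        Sha1::from(format!("{}{}{}", name, email, n)).hexdigest()
    }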
|  | Haven't tested this at all so I have no idea if it works. Just getting a
draft committed. | 
|  | No longer used after switching migration runners. | 
|  | Starting to set up database interactions. We need a way to insert
purchasers into the database. | 
|  | Add a script to install dependencies and perform initial application
setup.
Another script sources the environment file. | 
|  | Running
    $ mysql -u user dome_key < 20181109031633_create_purchasers.up.sql
worked just fine, but running the migration through 'migrate' produced
this error:
    (details: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'DELIMITER $$
    CREATE TRIGGER purchasers_updated_at
    BEFORE UPDATE
    ON purchasers FO' at line 1)
Probably the same problem that 'migrant' had, but it just didn't give me
an error message.
After having to fiddle and mess around with different parts of the
migration for a while, it finally turned out that the `DELIMITER` seemed
to be the problem.
Got rid of it as well as the `BEGIN`/`END` that depended on it. Looks
like our timestamp updater still works even without `BEGIN`, though, so
that's good. When I had first written the code, I figured I should use
`BEGIN` because that was how it was written in a couple examples I had
seen. | 
|  | Switched migration runners from https://crates.io/crates/migrant to
https://github.com/golang-migrate/migrate . | 
|  | Wrap all `migrate` subcommands. Turns out we do need the `-path`
argument for the other commands. Doesn't appear to work correctly for
`create`, but for the others it works fine. | 
|  | Wraps the `migrate` command (https://github.com/golang-migrate/migrate).
The command requires you to pass in a bunch of shell options that should
be constant for an application.
Encode those constants in this wrapper function. Now all you need to do
is run:
    $ migrate migration_name
and the correct migrations will be generated.
The migrations get created in the current directory. I tried passing
`-source file://migrations`, `-source file://./migrations`, and
`-path migrations`, but no combination seemed to put the generated
migration files in the migrations/ directory. Settled on moving the
files manually in the helper function. | 
|  | A database table to hold purchaser information so we can re-generate
licenses in case purchasers lose their license keys.
Needed to use a trigger to update the `updated_at` field on `UPDATE` as
you can't do `ON UPDATE UTC_TIMESTAMP()` in the column definition
> You can not specify UTC_TIMESTAMP as default to specify automatic
> properties
(https://dba.stackexchange.com/questions/20217/mysql-set-utc-time-as-default-timestamp/21619#21619)
Weirdly, the trigger isn't working when applying the migration with
    $ migrant apply
but it is working when running
    $ mysql -u user dome_key < migrations/20181108234653_create-purchasers/up.sql
Not sure what's going on there, but 'migrant' appears to have trouble
realising there are errors coming back from MySQL and executes the
migrations regardless. It also doesn't print syntax error messages from
MySQL which is very inconvenient. Migrant seemed to be the most advanced
migration CLI on crates.io, and I was hoping to use a Rust program for
this, but for simplicity, I'm thinking I'll have to go with a different
migration runner. Considering https://github.com/golang-migrate/migrate. | 
|  | Use 'migrant' for migrations (https://crates.io/crates/migrant).
TOML config file generated and modified from `migrant init`.
Database connection parameters should be passed in through environment
variables, hence the '.env.sample'. | 
|  | Allows us to test the FastCGI script locally.
Thanks to this article for describing how to set up a local FastCGI
server with Lighttpd:
http://yaikhom.com/2014/12/18/handling-requests-c-fast-cgi-and-lighttpd.html
Found out how to get the current config directory using `var.CWD` from:
https://stackoverflow.com/questions/11702989/lighttpd-conf-document-root-as-directory-containing-config-file/12435777#12435777 | 
|  | As these crates are only used in tests, move them to the
`dev-dependencies` section, and add `cfg(test)` to 'base64'. | 
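On the Rust side, the `cfg(test)` part amounts to gating the crate import (2015-edition style):
    // base64 now lives under [dev-dependencies], so only link it when
    // compiling tests.
    #[cfg(test)]
    extern crate base64;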
|  | In order to properly verify the signature, dictionary entries must be
serialized in sorted order. Seems simpler to put the onus on the caller
to ensure the entries can be sorted rather than having to deal with that
myself. | 
|  | Hoping this is how to set up the verifier to verify the signature. | 
|  | Not sure if this works yet as I haven't tested it, but it follows most
of the examples in various languages on:
https://paddle.com/docs/reference-verifying-webhooks/
Just need to add in the comparison to the input signature. |