Add tests, vend queue #6

Closed
wants to merge 1 commit

Conversation

abrandemuehl
Contributor

This adds a queue for vends, so vend() can be called as many times as needed and every queued vend will eventually run.

In addition, some unit tests are included that are specific to our hat configuration. If the hat's pin assignments change, we'll have to update the tests.

Resolves #2
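
As a rough illustration of the idea (not the actual code in this PR; `vend_slot()` is just a made-up stand-in for the real hardware call):

```python
import queue
import threading

def vend_slot(slot):
    # Placeholder for the real hardware call that drives the hat's pins.
    print(f"vending slot {slot}")

# Shared FIFO of pending vends.
vend_queue = queue.Queue()

def _worker():
    # Drain the queue forever; each hardware vend blocks only this thread.
    while True:
        slot = vend_queue.get()
        try:
            vend_slot(slot)
        finally:
            vend_queue.task_done()

threading.Thread(target=_worker, daemon=True).start()

def vend(slot):
    # Nonblocking: callers can enqueue as many vends as needed.
    vend_queue.put(slot)
```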

@narendasan
Member

Is this a per-user queue or a general queue?

@abrandemuehl
Contributor Author

This is a general queue.

If we want a per-user queue, it would be easy to batch requests together, but doing something more complex may take some work.

@narendasan
Member

Yeah, I guess keeping the machine controller as simple as possible is probably a good idea; we can implement the batching functionality on the service.

@narendasan
Member

cc @sameetandpotatoes

Adds unit tests for the vending functions and makes vends get queued into a
worker thread, making them nonblocking

Resolves #2
@sameetandpotatoes
Member

So once #8 gets merged, this PR can get rebased and will be ready to test? I think it seems pretty good.

I agree, a general queue would be best, since we already decided to support an array of items being passed in. Every time merch calls /vend, it makes one request containing a transaction of the items the user has requested, which basically handles it service-side already.

@abrandemuehl
Contributor Author

I think this one will need a bit of work before it's merged. The problem is that the queue vends asynchronously, so we'll lose the feedback of whether a vend failed or not.

If this goes through, either the request handler would have to wait for vends to finish, or the machine would have to ping the service whenever a vend finishes successfully.
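
One rough sketch of the first option (all names made up, not code from this PR): the handler hands the worker a `Future` and blocks on it, so the response can still report success or failure.

```python
import queue
import threading
from concurrent.futures import Future

def vend_slot(slot):
    # Placeholder for the real blocking hardware call.
    return True

vend_queue = queue.Queue()

def _worker():
    while True:
        slot, fut = vend_queue.get()
        try:
            fut.set_result(vend_slot(slot))
        except Exception as exc:
            fut.set_exception(exc)
        finally:
            vend_queue.task_done()

threading.Thread(target=_worker, daemon=True).start()

def vend_and_wait(slot, timeout=10.0):
    # The request handler blocks here until the queued vend actually runs,
    # so the HTTP response still reflects whether it succeeded.
    fut = Future()
    vend_queue.put((slot, fut))
    return fut.result(timeout=timeout)
```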

@narendasan
Member

Can you not use a callback?

@abrandemuehl
Contributor Author

I can, but it requires the Pi to send messages back, which will be more complex for the service.

It would work, but we'd need some endpoints on the service for receiving those messages.
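
Something like this on the Pi side, for example (the `/vend-result` endpoint and the URL are made up; the service would have to grow something like it):

```python
import requests

SERVICE_URL = "http://service.example"  # placeholder address

def report_vend_result(slot, success):
    # Ping the service whenever a queued vend finishes.
    requests.post(
        f"{SERVICE_URL}/vend-result",
        json={"slot": slot, "success": success},
        timeout=5,
    )
```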

@narendasan
Member

Most frameworks have a callback mechanism IIRC; just send an update in the response to the request.

@abrandemuehl
Contributor Author

Do you mean to just wait until the vend finishes before sending the HTTP response?

@narendasan
Member

Yeah. With device-side queuing it may be a bit more complex, but if the queue is maintained by the server, just wait for the vend to complete and return the response.

@abrandemuehl
Contributor Author

If we're waiting for the vends to finish before responding, this pull request doesn't add anything over #8. In #8, the vends are "queued" in that they run in the order they are received, in a for loop that blocks each time to wait for the previous one to finish.
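
For contrast, the #8 behaviour as I understand it is basically this (a sketch only, not the actual #8 code):

```python
def vend_slot(slot):
    # Placeholder for the blocking hardware call.
    return True

def handle_vend_request(items):
    # Vend each requested item in order; every call blocks until the
    # previous vend has finished, and the results go back in the response.
    return [vend_slot(slot) for slot in items]
```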

@narendasan
Member

I think the real question is who should manage the queue. If the service maintains some sort of abstracted queue, makes a request, and waits for a callback, I think that makes the device code simpler; otherwise you need some sort of barrier to synchronize orders and execution (so looking at changing to a stream-based protocol).
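
For example, a service-side worker could own the queue and only ever send the Pi one blocking vend at a time (URLs and endpoint names here are made up):

```python
import queue
import threading
import requests

PI_URL = "http://vending-pi.local"  # placeholder address

order_queue = queue.Queue()

def handle_result(slot, success):
    # Placeholder for whatever the service does with the outcome.
    print(f"slot {slot}: {'ok' if success else 'failed'}")

def _service_worker():
    # The service owns the queue; the Pi just vends one space per request.
    while True:
        slot = order_queue.get()
        try:
            resp = requests.post(f"{PI_URL}/vend", json={"slot": slot}, timeout=30)
            handle_result(slot, resp.ok)
        finally:
            order_queue.task_done()

threading.Thread(target=_service_worker, daemon=True).start()
```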

@narendasan
Member

Tbh, all I want is to be able to ask the machine to vend a space, find out whether it worked or not, and let the service implement any more complicated functionality.

@narendasan
Member

That will allow us to iterate faster, as the role of the Pi is just to abstract control of the machine behind some API.

@abrandemuehl
Contributor Author

If that is the goal (and I agree it'd be better to have a machine with a stable functionality API), then I'll close this PR, since it puts too much onto the Pi and would complicate the API.
