A Laravel queue connector to process jobs on successful database transaction commits.
This connector is very similar to the “sync” connector, with the difference that jobs are executed after the database transaction has been committed instead of instantly.
It is useful, for example, when sending notifications that cause other processes or third-party applications to read data from your database. When using database transactions, other queue connectors offer no guarantee that those processes or third parties will see the data as you set it when you sent the notification, as the transaction might not have been committed yet. With this connector, notifications will be sent on the transaction commit event, when the database transaction level reaches 0.
Installation
The preferred way to install this extension is through composer.
With Composer installed, you can then install the extension using the following commands:
Usage
The basic usage of this queue is shown in the following example.
DB::transaction(function() {
    // Do something

    dispatch(function() use ($model) {
        $model->notify();
    })->onConnection('transaction-commit');
});
Here, the job specified as callback will be delayed until the transaction is
committed.
Dispatching jobs on nested transactions
You can dispatch jobs to this queue inside nested transactions, and the jobs will be processed after all the transactions have been resolved and the commit has been performed on the database.
class ProcessExample {
    public function run() {
        DB::transaction(function() {
            // Do something more
            $this->nestedRun();
        });
    }

    public function nestedRun() {
        DB::transaction(function() {
            $model = new NotifiableExampleModel();

            // This job will be fired when all the transactions have been committed.
            dispatch(function() use ($model) {
                $model->notify();
            })->onConnection('transaction-commit');
        });
    }
}

$command = new ProcessExample();
$command->run();
In this example, the job is dispatched in the transaction created in the nestedRun method, but this method is called by the run method from inside another transaction. The execution of the $model->notify() callback will be delayed until all the transactions have been committed.
Multiple database connections
The queue driver will use the connection names defined in the database config
file in order to create different queues for each connection.
If you don’t specify the queue to dispatch the job to, the default queue will be used, and the queue will be processed when the default connection reaches transaction level 0.
If you want to start a transaction on a database connection other than the default one, remember to specify the queue with the connection name on the jobs dispatched to the transaction-commit queue, as in the following example.
DB::connection('other-connection')->transaction(function() {
    // Do something
    $model = new NotifiableExampleModel();

    dispatch(function() use ($model) {
        $model->notify();
    })->onConnection('transaction-commit')->onQueue('other-connection');
});
Testing
If you use a transaction rollback strategy for testing against the database, you can set the environment variable TRANSACTION_COMMIT_DISPATCH_INSTANTLY in order to dispatch the jobs instantly instead of on transaction commit.
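For example, a minimal way to set this for your test suite, assuming the standard Laravel phpunit.xml layout (fragment shown, surrounding file abbreviated):

```xml
<!-- phpunit.xml (fragment, sketch only) -->
<php>
    <!-- Dispatch transaction-commit jobs instantly during tests -->
    <env name="TRANSACTION_COMMIT_DISPATCH_INSTANTLY" value="true"/>
</php>
```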
Further Considerations
If there isn’t any open transaction on the database connection, the job will be fired instantly.
If a transaction is rolled back, all the pending jobs of the rolled back
connection will be discarded.
A tool like tldr should be as frictionless as possible to use and show the
output as fast as possible.
We think that tealdeer reaches these goals. We put together a (more or less)
reproducible benchmark that compiles a handful of clients from source and
measures the execution times on a cold disk cache. The benchmarking is run in a
Docker container using sharkdp’s hyperfine
(Dockerfile).
As you can see, tealdeer is one of the fastest of the tested clients.
However, we strive for useful features and code quality over raw performance,
even if that means that we don’t come out on top in this friendly competition.
That said, we are still optimizing the code, for example when the outfieldr developers suggested switching to a native TLS implementation instead of the system libraries.
Development
Creating a debug build with logging enabled:
$ cargo build --features logging
Release build without logging:
$ cargo build --release
To enable the log output, set the RUST_LOG env variable:
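For example (a sketch; the "debug" level and binary path are assumptions, adjust them to your setup):

```shell
# Enable debug-level log output via an env_logger-style filter
export RUST_LOG=debug
# Then run the logging-enabled debug build, e.g.:
# ./target/debug/tldr tar
```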
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall
be dual licensed as above, without any additional terms or conditions.
Thanks to @severen for coming up with the name “tealdeer”!
This is my current professional website created with the Next.js framework and a Redis KV database. It is currently deployed on Vercel.
https://mark-valentino.vercel.app/
Features
Personal Project Listings
This website displays my personal coding projects in various ways with the data for each project coming from one JavaScript object. Projects can be displayed as:
a grid of cards
a table
individual project pages using Next.js dynamic routes
Projects Page
Projects can be toggled between being shown as a grid of cards or a table with one click.
Projects can be sorted from newest to oldest or oldest to newest.
Projects can be filtered via a search bar.
Individual Project Pages
Some projects have extra content not shown in card views; this content is shown on the individual project pages. Projects that can run in the browser are embedded in iframes.
Blog
My site has a blog with “infinite” scrolling to cut down on loading times. All blog posts also have individual routes associated with them, which are created dynamically. From the infinite scrolling view, each blog post’s unique dynamic link can be shared by clicking a button that writes it to the user’s clipboard. Each blog post has a view counter, made possible by an API route and a connection to a Redis database.
Contact Form
Users can send a message through the Contact Me page. Messages are stored in the database, along with the associated email and the date saved. There is form validation for the following properties:
User isn’t a robot with hCaptcha validation.
Email matches a valid pattern.
Email length is under 40 characters.
Message most likely isn’t gibberish (contains at least 4 of the most common English words, out of a list of over 100).
Message is under 1280 characters but longer than 12.
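The gibberish check can be sketched like this (assumed logic, not the site’s actual code; the word list here is a tiny stand-in for the real 100+ entry list):

```javascript
// A message passes if it contains at least 4 distinct words from a
// common-English-words list.
const COMMON_WORDS = new Set([
  "the", "be", "to", "of", "and", "a", "in", "that", "it", "is",
]); // the real list has over 100 entries

function probablyNotGibberish(message) {
  const words = message.toLowerCase().match(/[a-z']+/g) || [];
  const hits = new Set(words.filter((w) => COMMON_WORDS.has(w)));
  return hits.size >= 4;
}
```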
Reading stored messages
Messages can be read by going to /admin and entering the number of messages you would like to read and the password. Messages displayed are ordered from newest to oldest, so if you request 1 message, you will get the latest message someone sent.
UI/UX
Material UI and Material UI inspired components are used throughout the site.
The site is responsive and is compatible with mobile, small desktop, and large desktop resolutions.
Microsoft Clarity is embedded for user experience analysis.
Add a .env.development.local file if it doesn’t already exist.
Then create an account on Microsoft Clarity and copy your Clarity ID.
Add this line to the env file:
MS_CLARITY_ID="<your Clarity ID>"
Vercel Deployments
Add your Clarity ID to the list of environment variables, with the variable name MS_CLARITY_ID.
Vercel Redis Integration for Local Development and Deployments
Welcome to Adivina el Pokémon, a fun and challenging game where you can put your Pokémon knowledge to the test. In this game you select a level, then the app shows you the silhouette of a Pokémon and you have to choose which of the options it is.
This project is for educational purposes and was used to learn and practice ReactJS, Context API, PokeAPI, Axios, Tailwind CSS, Figma, etc.
Features
Selectable difficulty levels
An attractive, easy-to-use interface
A scoreboard to keep track of your progress
An up-to-date Pokémon database
How to play
Select a difficulty level
Look at the Pokémon’s silhouette and choose the correct answer from the available options
Keep playing until you have guessed every Pokémon or lost all your lives
Run commands
To run Adivina el Pokémon in your local environment, follow these commands:
Start the development server
npm run dev
Technologies used
Adivina el Pokémon is built with the following technologies:
Vite
ReactJS
Custom Hooks
Context API
PokeAPI
Axios
Tailwind CSS
Figma
License
Adivina el Pokémon is available under the MIT license.
Bakaláři server for faking authentication in the mobile app Bakaláři Online📱 (Apple, Google, Aptoide).
About 🤔
Most companies offering student discounts verify your eligibility by checking if you are authenticated in the Bakaláři Online mobile app. This project is a proof-of-concept implementation of all Bakaláři API endpoints required for the authentication.
Usage 💻
After installing the official app, you can choose between creating a new profile and using a one-time sign-in option. I recommend using the one-time sign-in option just for testing.
When filling in the information, use the URL of the dummy server as the server name. For example, in the picture below I used my demo server (https://dub.fosny.eu). I also entered “DISPLAY NAME” as the username and “CLASS NAME” as the password.
After clicking the “Sign in” button, it will redirect you to the user page with your username as the student’s name and password as the student’s class.
NOTE: Clicking on any module except “Dokumenty” will open a blank page saying an error occurred as the dummy server responds with a 400 status code to any unknown request.
To change your student name or class name:
If you created a profile:
Open the left sidebar
Click on your student name to get to the “select profile” page
Press the edit button next to your profile
Change your profile name to your needs
If you used one-time sign-in: sign out and log back in with different credentials
Docker/Podman compose 🚚
An example config file to expose a service at port 8080:
NOTE: Bakaláři Online requires HTTPS protocol and will fail with plain HTTP or a self-signed certificate. You have to put this service behind a proxy with a valid certificate. (I recommend using Let’s Encrypt🔒)
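A sketch of such a compose file (the image name below is a placeholder assumption; use the image provided by or built from this repository, and remember the HTTPS proxy note above):

```yaml
# docker-compose.yml (sketch; image name is hypothetical)
services:
  bakalari-dummy:
    image: bakalari-dummy-server:latest  # placeholder, build from this repo
    ports:
      - "8080:8080"   # expose the service at port 8080
    restart: unless-stopped
```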
Parable Http is a minimalist Http library used to receive requests and send responses. It is not a full implementation, offering just-enough functionality.
addHeader(string $header, string $value): void – add single header
From the HasHeaders trait:
getHeader(string $header): ?string – get a single header by string, null if non-existing
getHeaders(): string[] – get all headers
From the HasStatusCode trait:
getStatusCode(): int – the status code to be sent (i.e. 200)
getStatusCodeText(): ?string – the status code text to be sent (i.e. OK)
setStatusCode(int $statusCode): void – set the status code
Dispatcher
dispatch(Response $response): void – dispatch a Response, sending all its content as set
dispatchAndTerminate(Response $response, int $exitCode = 0): void – dispatch a Response and terminate, i.e., ending program flow immediately afterwards
Contributing
Any suggestions, bug reports or general feedback are welcome. Use GitHub issues and pull requests, or find me over at devvoh.com.
License
All Parable components are open-source software, licensed under the MIT license.
numpy (pip install numpy — or install Anaconda distribution)
Keras 1.2.0+, but less than 2.0 (pip install keras==1.2)
Theano or Tensorflow. The code is fully tested on Theano. (pip install theano)
Usage
While any run is going on, the results as well as the AI models will be saved in the ./results subfolder. For a complete run, with five experiments for each method, use the following command (it may take several hours depending on your machine):
./run.sh
NOTE: Because the state-shape is relatively small, the deep RL methods of this code run faster on CPU.
Alternatively, for a single run use the following commands:
--mode can be one of dqn, dqn+1, hra, hra+1, or all.
Demo
We have also provided the code to demo Tabular GVF/NO-GVF methods. You first need to train the model using one of the above commands (Tabular GVF or no-GVF) and then run the demo. For example,
Müsli is a flexible, fast, and generic binary serialization framework for
Rust, in the same vein as serde.
It provides a set of formats, each with its own well-documented set of features and tradeoffs. Every byte-oriented serialization method, including escaped formats like musli::json, has full #[no_std] support with or without alloc. A particularly neat component provides low-level, refreshingly simple zero-copy serialization.
Add the following to your Cargo.toml using the format you want
to use:
[dependencies]
musli = { version = "0.0.145", features = ["storage"] }
Design
The heavy lifting is done by the Encode and Decode derives which are
documented in the derives module.
Müsli operates based on the schema represented by the types which implement
these traits.
use musli::{Encode, Decode};

#[derive(Encode, Decode)]
struct Person {
    /* .. fields .. */
}
Note that by default a field is identified by its numerical index, which would change if fields are re-ordered. Renaming fields and setting a default naming policy can be done by configuring the derives.
The binary serialization formats provided aim to efficiently and accurately
encode every type and data structure available in Rust. Each format comes
with well-documented tradeoffs and aims to be fully memory safe
to use.
Internally we use the terms “encoding”, “encode”, and “decode” because they are distinct from serde’s “serialization”, “serialize”, and “deserialize”, allowing for clearer interoperability between the two libraries. Encoding and decoding also have more of a “binary serialization” vibe, which more closely reflects the focus of this framework.
Müsli is designed on similar principles as serde, relying on Rust’s powerful trait system to generate code which can largely be optimized away. The end result should be very similar to handwritten, highly optimized code.
As an example of this, these two functions both produce the same assembly
(built with --release):
Müsli’s data model does not speak Rust. There are no serialize_struct_variant methods which provide metadata about the type being serialized. The Encoder and Decoder traits are agnostic about this. Compatibility with Rust types is entirely handled using the Encode and Decode derives in combination with modes.
We use GATs to provide easier to use abstractions. GATs were not
available when serde was designed.
Everything is a Decoder or Encoder. Field names are therefore not limited to strings or indexes, but can be arbitrary types if needed.
Visitors are only used when needed. serde completely relies on visitors when deserializing, and the corresponding method is treated as a “hint” to the underlying format. The deserializer is then free to call any method on the visitor depending on what the underlying format actually contains. In Müsli, we swap this around: if the caller wants to decode an arbitrary type, it calls decode_any. The format can then either signal the appropriate underlying type or call Visitor::visit_unknown, telling the implementer that it does not have access to type information.
We’ve invented moded encoding, allowing the same Rust types to be encoded in many different ways with much greater control over how things are encoded. By default we include the Binary and Text modes, providing sensible defaults for binary and text-based formats.
We support detailed tracing when decoding for much improved
diagnostics of where something went wrong.
Formats
Formats are currently distinguished by supporting various degrees of
upgrade stability. A fully upgrade stable encoding format must tolerate
that one model can add fields that an older version of the model should be
capable of ignoring.
Partial upgrade stability can still be useful as is the case of the
musli::storage format below, because reading from storage only requires
decoding to be upgrade stable. So if correctly managed with
#[musli(default)] this will never result in any readers seeing unknown
fields.
reorder determines whether fields must occur in exactly the order in which
they are specified in their type. Reordering fields in such a type would
cause unknown but safe behavior of some kind. This is only suitable for
communication where the data models of each client are strictly
synchronized.
missing determines if reading can handle missing fields through something
like Option<T>. This is suitable for on-disk storage, because it means
that new optional fields can be added as the schema evolves.
unknown determines if the format can skip over unknown fields. This is
suitable for network communication. At this point you’ve reached upgrade
stability. Some level of introspection is possible
here, because the serialized format must contain enough information about
fields to know what to skip which usually allows for reasoning about basic
types.
self determines if the format is self-descriptive, allowing the structure of the data to be fully reconstructed from its serialized state. These formats do not require models to decode and can be converted to and from dynamic containers such as musli::value for introspection. Such formats also allow for type coercions to be performed, so that a signed number can be correctly read as an unsigned number if it fits in the destination type.
For every feature you drop, the format becomes more compact and efficient.
musli::storage using #[musli(packed)] for example is roughly as compact
as bincode while musli::wire is comparable in size to something like
protobuf. All formats are primarily byte-oriented, but some might
perform bit packing if the benefits are obvious.
Upgrade stability
The following is an example of full upgrade stability using
musli::wire. Version1 can be decoded from an instance of Version2
because it understands how to skip fields which are part of Version2.
We’re also explicitly adding #[musli(name = ..)] to the fields to ensure
that they don’t change in case they are re-ordered.
use musli::{Encode, Decode};

#[derive(Debug, PartialEq, Encode, Decode)]
struct Version1 {
    #[musli(Binary, name = 0)]
    name: String,
}

#[derive(Debug, PartialEq, Encode, Decode)]
struct Version2 {
    #[musli(Binary, name = 0)]
    name: String,
    #[musli(Binary, name = 1)]
    #[musli(default)]
    age: Option<u32>,
}

let version2 = musli::wire::to_vec(&Version2 {
    name: String::from("Aristotle"),
    age: Some(61),
})?;

let version1: Version1 = musli::wire::decode(version2.as_slice())?;
The following is an example of partial upgrade stability using
musli::storage on the same data models. Note how Version2 can be
decoded from Version1 but not the other way around making it suitable
for on-disk storage where the schema can evolve from older to newer
versions.
let version2 = musli::storage::to_vec(&Version2 {
    name: String::from("Aristotle"),
    age: Some(61),
})?;

assert!(musli::storage::decode::<_, Version1>(version2.as_slice()).is_err());

let version1 = musli::storage::to_vec(&Version1 {
    name: String::from("Aristotle"),
})?;

let version2: Version2 = musli::storage::decode(version1.as_slice())?;
Modes
In Müsli in contrast to serde the same model can be serialized in
different ways. Instead of requiring the use of distinct models we support
implementing different modes for a single model.
A mode is a type parameter, which allows for different attributes to apply
depending on which mode an encoder is configured to use. A mode can apply to
any musli attributes giving you a lot of flexibility.
If a mode is not specified, an implementation will apply to all modes (M). If at least one mode is specified, it will be implemented for all modes which are present in the model, plus Binary and Text. This way, an encoding which uses the default Binary or Text modes should always work.
For more information on how to configure modes, see derives.
Below is a simple example of how we can use two modes to provide two
completely different formats using a single struct:
use musli::{Decode, Encode};
use musli::json::Encoding;

enum Alt {}

#[derive(Decode, Encode)]
#[musli(Text, name_all = "name")]
#[musli(mode = Alt, packed)]
struct Word<'a> {
    text: &'a str,
    teineigo: bool,
}

const TEXT: Encoding = Encoding::new();
const ALT: Encoding<Alt> = Encoding::new().with_mode();

let word = Word {
    text: "あります",
    teineigo: true,
};

let out = TEXT.to_string(&word)?;
assert_eq!(out, r#"{"text":"あります","teineigo":true}"#);

let out = ALT.to_string(&word)?;
assert_eq!(out, r#"["あります",true]"#);
Going very fast
With the previous sections it should be apparent that speed is primarily a game of tradeoffs. If we make every tradeoff in favor of speed, Müsli is designed to be the fastest framework out there.
The tradeoffs we will be showcasing to achieve speed here are:
Pre-allocate serialization space. This avoids all allocations during
serialization. The tradeoff is that if the data we are serializing
contains dynamically sized information which goes beyond the pre-allocated
space, we will error.
Use fixed-sized integers and floats. We use more space, but the cost of
serializing numerical fields essentially boils down to copying them.
Use a native byte order. With this we avoid any byte-swapping
operations. But our data becomes less portable.
Use a packed format. This doesn’t allow for any upgrades, but we avoid
paying the overhead of serializing field identifiers.
Use the Slice allocator. This avoids all heap allocations using the
global allocator. While the global allocator is quite efficient and
normally shouldn’t be avoided, the slice allocator is a fixed-slab
allocator. The tradeoff here is that we will error in case we run out of
memory, but we only need to use the allocator if the types being
serialized (or the format) demands it.
Disable error handling. Code generation will be able to remove
everything related to error handling, like allocations. To do this we can
make use of the default context without configuring it for tracing. If
an error happens, we are only informed of that fact through a zero-sized
marker type.
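The fixed-width, native-byte-order tradeoff from the list above can be illustrated in plain Rust, independently of Müsli (a sketch using only the standard library):

```rust
fn main() {
    // Fixed-width integer with native byte order: "serialization" is
    // just a copy of the value's bytes, with no byte swapping.
    let value: u32 = 0x1122_3344;
    let bytes = value.to_ne_bytes();
    assert_eq!(u32::from_ne_bytes(bytes), value);

    // A portable format instead commits to one byte order explicitly,
    // paying for a swap on mismatched platforms:
    let le = value.to_le_bytes();
    assert_eq!(le, [0x44, 0x33, 0x22, 0x11]);
    assert_eq!(u32::from_le_bytes(le), value);
}
```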
We also need some cooperation from the types being serialized since they
need to use the Packed mode we defined just above:
use musli::{Encode, Decode};

#[derive(Encode, Decode)]
#[musli(mode = Packed, packed)]
struct Person {
    name: String,
    age: u32,
}
Using the framework above also needs a bit of prep, namely the slice allocator needs to be initialized:
use musli::alloc::{ArrayBuffer, Slice};

let mut buf = ArrayBuffer::new();
let alloc = Slice::new(&mut buf);
That’s it! You are now using Müsli in the fastest possible mode. Feel free
to use it to “beat” any benchmarks. In fact, the musli_packed mode in our
internal benchmarks beat pretty much every framework with these methods.
My hope is that this should illustrate why you shouldn’t blindly trust
benchmarks. Sometimes code is not fully optimized, but most of the time
there is a tradeoff. If a benchmark doesn’t tell you what tradeoffs are
being made, don’t just naively trust a number.
Unsafety
This is a non-exhaustive list of unsafe use in this crate, and why they are
used:
A mem::transmute in Tag::kind, which guarantees that converting into the Kind enum (which is #[repr(u8)]) is as efficient as possible.
A largely unsafe SliceReader which provides more efficient reading than the default Reader impl for &[u8] does, since it can perform most of the necessary comparisons directly on the pointers.
Some unsafety related to UTF-8 handling in musli::json, because we check
UTF-8 validity internally ourselves (like serde_json).
FixedBytes<N>, which is a stack-based container that can operate over
uninitialized data. Its implementation is largely unsafe. With it
stack-based serialization can be performed which is useful in no-std
environments.
Some unsafe is used for owned String decoding in all binary formats to
support faster string processing through simdutf8. Disabling the
simdutf8 feature (enabled by default) removes the use of this unsafe.
To ensure this library is correctly implemented with regards to memory
safety, extensive testing and fuzzing is performed using miri. See
tests for more information.
Footnotes
As in Müsli should be able to do everything you need and more. ↩
This is strictly not a binary serialization, but it was implemented
as a litmus test to ensure that Müsli has the necessary framework features
to support it. Luckily, the implementation is also quite good! ↩
mainnet-fork/: fork mainnet account states to test swap input/output estimates
each folder contains a corresponding README.md which explains how it works
dexs supported
serum
aldrin
saber
mercurial
orca
other notes
we use rust unit tests (with mainnet forking) to ensure our swap quotes are correct (quoted swap amount = actual swap amount)
to figure out how to interface with each dex (most don't have rust sdks or even public ts sdks), we reverse engineer how to interact with them through the jupiter-swap sdk (analyzing the npm package folder because it's also not public)
in the client we use a brute-force approach to find arb opportunities instead of negative-cycle algorithms because it's much faster and can find all opportunities
we also don't calculate the optimal swap input amount for each arb because it's faster to spam multiple decreasing amounts and let the largest one land (this is what the winner of the arbitrage opportunities was doing: e.g., send txs with input sizes N, N/2, N/4, ...)
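the decreasing-input-size strategy can be sketched like this (illustrative only, not the repo's actual code; the function name and minimum-size cutoff are assumptions):

```rust
// Instead of solving for the optimal arb input, generate N, N/2, N/4, ...
// down to a minimum size, send one tx per size, and let the largest
// profitable one land.
fn candidate_input_sizes(max_input: u64, min_input: u64) -> Vec<u64> {
    let mut sizes = Vec::new();
    let mut cur = max_input;
    while cur >= min_input && cur > 0 {
        sizes.push(cur);
        cur /= 2;
    }
    sizes
}

fn main() {
    // 8 -> 4 -> 2 -> 1
    assert_eq!(candidate_input_sizes(8, 1), vec![8, 4, 2, 1]);
}
```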
why do we need an on-chain swap program? check this post out: if you swap from A -> B -> C you might get more/less of B than expected, which affects the swap instruction to C
why share this alpha
the life of a lone searcher is a lonely one where you can't share what you find or share your code. while working on this project i realized this is not what i'm about, and thus i open source