Blog

  • laravel-transaction-commit-queue

    Laravel Transaction Commit Queue

    A Laravel queue connector to process jobs on successful database transactions
    commits.

    This connector is very similar to the “sync” connector with the difference that
    jobs are executed after the database transaction has been committed instead of
    instantly.

    It is useful, for example, when sending notifications that cause other processes
    or third-party applications to read data from your database. When using database
    transactions with other queue connectors, there is no guarantee that these
    processes or third parties will find the data in the state you left it when the
    notification was sent, as the transaction might not have been committed yet.
    With this connector, the notifications are sent on the transaction commit event,
    when the database transaction level reaches 0.

    Installation

    The preferred way to install this extension is through composer.

    With Composer installed, you can then install the extension using the following commands:

    $ php composer.phar require jlorente/laravel-transaction-commit-queue

    or add

    ...
        "require": {
            "jlorente/laravel-transaction-commit-queue": "*"
        }

    to the require section of your composer.json file.

    Configuration

    Register the ServiceProvider in your config/app.php service provider list.

    config/app.php

    return [
        //other stuff
        'providers' => [
            //other stuff
            Jlorente\Laravel\Queue\TransactionCommit\TransactionCommitQueueServiceProvider::class,
        ];
    ];

    Then add the driver to the application's queue config file.

    config/queue.php

    return [
        //other stuff
        'connections' => [
            //other stuff
            'transaction-commit' => [
                'driver' => 'transaction-commit',
            ],
        ],
    ];

    And publish the configuration file.

    $ php artisan vendor:publish --provider='Jlorente\Laravel\Queue\TransactionCommit\TransactionCommitQueueServiceProvider'

    Usage

    See the Laravel documentation to learn
    how to use jobs and queues.

    The basic usage of this queue is shown in the following example.

    DB::transaction(function() {
        // Do something
    
        dispatch(function() use ($model) {
            $model->notify();
        })->onConnection('transaction-commit');
    });

    Here, the job specified as a callback will be delayed until the transaction is
    committed.

    Dispatching jobs on nested transactions

    You can dispatch jobs to this queue inside nested transactions, and the jobs will
    be processed after all the transactions have been resolved and the commit has
    been performed on the database.

    class ProcessExample {
        public function run() {
            DB::transaction(function() {
                // Do something more
    
                $this->nestedRun();
            });
        }
    
        public function nestedRun() {
            DB::transaction(function() {
                $model = new NotifiableExampleModel();
    
            // This job will be fired when all the transactions have been committed.
                dispatch(function() use ($model) {
                    $model->notify();
                })->onConnection('transaction-commit');
            });
        }
    }
    
    $command = new ProcessExample();
    $command->run();

    In this example, the job is dispatched inside the transaction created in the
    nestedRun method, but this method is called by the run method from within
    another transaction. The execution of the $model->notify() callback will be
    delayed until all the transactions have been committed.
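The mechanism can be modeled with a short sketch (Python here purely for illustration; the class and method names are invented and do not belong to this package): dispatched jobs are buffered while a transaction is open and flushed only when the nesting level returns to 0, while a rollback discards the pending jobs.

```python
# Illustrative model (invented names) of a transaction-commit queue:
# jobs are buffered and only run when the outermost transaction commits,
# i.e. when the nesting level drops back to 0.
class TransactionCommitQueue:
    def __init__(self):
        self.level = 0
        self.pending = []

    def begin(self):
        self.level += 1

    def dispatch(self, job):
        if self.level == 0:
            job()                    # no open transaction: run instantly
        else:
            self.pending.append(job)

    def commit(self):
        self.level -= 1
        if self.level == 0:          # outermost commit: flush the queue
            jobs, self.pending = self.pending, []
            for job in jobs:
                job()

    def rollback(self):
        self.level -= 1
        self.pending.clear()         # pending jobs of this connection are discarded

queue = TransactionCommitQueue()
ran = []
queue.begin()                        # outer transaction
queue.begin()                        # nested transaction
queue.dispatch(lambda: ran.append("notify"))
queue.commit()                       # inner commit: job is still buffered
assert ran == []
queue.commit()                       # outer commit: job runs now
assert ran == ["notify"]
```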

    Multiple database connections

    The queue driver will use the connection names defined in the database config
    file in order to create different queues for each connection.

    If you don’t specify a queue to dispatch the job to, the default queue is used,
    and it is processed when the default connection reaches transaction level 0.

    If you want to start a transaction on a database connection other than the
    default one, remember to specify the connection name as the queue on jobs
    dispatched to the transaction-commit queue, as in the following example.

    DB::connection('other-connection')->transaction(function() {
        // Do something
        $model = new NotifiableExampleModel();
    
        dispatch(function() use ($model) {
            $model->notify();
        })->onConnection('transaction-commit')->onQueue('other-connection');
    });

    Testing

    If you use a transaction rollback strategy for testing against the database, you
    can set the environment variable TRANSACTION_COMMIT_DISPATCH_INSTANTLY to
    dispatch jobs instantly instead of on transaction commit.
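For example, in a typical Laravel test setup you could set the variable in your test environment (the variable name is from this package; the file location and the value `true` are assumptions about your setup):

```ini
# .env.testing (or an <env> entry in phpunit.xml)
TRANSACTION_COMMIT_DISPATCH_INSTANTLY=true
```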

    Further Considerations

    If there isn’t any open transaction on the database connection, the job will
    be fired instantly.

    If a transaction is rolled back, all the pending jobs of the rolled back
    connection will be discarded.

    Remember that notifications can
    also be enqueued.

    License

    Copyright © 2020 José Lorente Martín jose.lorente.martin@gmail.com.

    Licensed under the BSD 3-Clause License. See LICENSE.txt for details.

    Visit original content creator repository
    https://github.com/jlorente/laravel-transaction-commit-queue

  • tealdeer

    tealdeer



    A very fast implementation of tldr in Rust: Simplified, example based and community-driven man pages.

    Screenshot of tldr command

    If you pronounce “tldr” in English, it sounds somewhat like “tealdeer”. Hence the project name 🙂

    In case you’re in a hurry and just want to quickly try tealdeer, you can find static binaries on the GitHub releases page!

    Docs (Installing, Usage, Configuration)

    User documentation is available at https://tealdeer-rs.github.io/tealdeer/!

    The docs are generated using mdbook. They can be edited through the markdown files in the docs/src/ directory.

    Goals

    High level project goals:

    • Download and cache pages
    • Don’t require a network connection for anything besides updating the cache
    • Command line interface similar or equivalent to the NodeJS client
    • Comply with the tldr client specification
    • Advanced highlighting and configuration
    • Be fast

    A tool like tldr should be as frictionless as possible to use and show the output as fast as possible.

    We think that tealdeer reaches these goals. We put together a (more or less) reproducible benchmark that compiles a handful of clients from source and measures the execution times on a cold disk cache. The benchmarking is run in a Docker container using sharkdp’s hyperfine (Dockerfile).

    Client (50 runs, 17.10.2021) Programming Language Mean in ms Deviation in ms Comments
    outfieldr Zig 9.1 0.5 no user configuration
    tealdeer Rust 13.2 0.5
    fast-tldr Haskell 17.0 0.6 no example highlighting
    tldr-hs Haskell 25.1 0.5 no example highlighting
    tldr-bash Bash 30.0 0.8
    tldr-c C 38.4 1.0
    tldr-python-client Python 87.0 2.4
    tldr-node-client JavaScript / NodeJS 407.1 12.9

    As you can see, tealdeer is one of the fastest of the tested clients. However, we strive for useful features and code quality over raw performance, even if that means we don’t come out on top in this friendly competition. That said, we are still optimizing the code, for example when the outfieldr developers suggested switching to a native TLS implementation instead of the system libraries.
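As a toy illustration of the measurement idea (a naive Python sketch with an invented `bench` helper; hyperfine does this far more rigorously, handling warmup runs, shell startup overhead, and outliers):

```python
import statistics
import subprocess
import time

def bench(cmd, runs=5):
    """Naively time a command's end-to-end execution in milliseconds.

    This only averages wall-clock time over a few runs; a real benchmark
    (like hyperfine) also controls for caches, warmup and outliers.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.mean(samples), statistics.stdev(samples)

# Example usage: mean_ms, dev_ms = bench(["tldr", "tar"])
```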

    Development

    Creating a debug build with logging enabled:

    $ cargo build --features logging
    

    Release build without logging:

    $ cargo build --release
    

    To enable the log output, set the RUST_LOG env variable:

    $ export RUST_LOG=tldr=debug
    

    To run tests:

    $ cargo test
    

    To run lints:

    $ rustup component add clippy
    $ cargo clean && cargo clippy
    

    MSRV (Minimum Supported Rust Version)

    When publishing a tealdeer release, the Rust version required to build it should be stable for at least a month.

    License

    Licensed under either of

    • Apache License, Version 2.0
    • MIT License

    at your option.

    Contribution

    Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

    Thanks to @severen for coming up with the name “tealdeer”!

    Visit original content creator repository https://github.com/tealdeer-rs/tealdeer
  • M-Valentino_NextJS

    Mark Valentino Next.js

    Vercel Status License: LGPL v2.1

    This is my current professional website created with the Next.js framework and a Redis KV database. It is currently deployed on Vercel. https://mark-valentino.vercel.app/

    Features

    Personal Project Listings

    This website displays my personal coding projects in various ways with the data for each project coming from one JavaScript object. Projects can be displayed as:

    • a grid of cards
    • a table
    • individual project pages using Next.js dynamic routes

    Projects Page

    • Projects can be toggled between being shown as a grid of cards or a table with one click.
    • Projects can be sorted from newest to oldest or oldest to newest.
    • Projects can be filtered via a search bar.

    Individual Project Pages

    Some projects have extra content not shown in card views, and this content is shown on the individual project pages. Projects that can be run in the browser are embedded and displayed in iframes.

    Blog

    My site has a blog with “infinite” scrolling to cut down on loading times. All blog posts also have individual routes associated with them, which are created dynamically. Blog posts in the infinite-scrolling view can have their unique dynamic link shared by clicking a button that copies it to the user’s clipboard. Each blog post has a view counter, which is made possible by an API route and a connection to a Redis database.

    Contact Form

    Users can send a message through the Contact Me page. Messages are stored in the database, along with the associated email address and the date they were saved. There is form validation for the following properties:

    • The user isn’t a robot (hCaptcha validation).
    • The email matches a valid pattern.
    • The email is under 40 characters long.
    • The message most likely isn’t gibberish (it contains at least 4 words from a list of over 100 of the most common English words).
    • The message is under 1280 characters but longer than 12.
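The checks above could be sketched roughly like this (a hypothetical Python model for illustration only; the site implements them in JavaScript, and the word list here is truncated):

```python
import re

# Hypothetical sketch of the contact-form validation rules described above.
COMMON_WORDS = {"the", "be", "to", "of", "and", "a", "in", "that", "have", "it"}
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(email, message):
    if not EMAIL_RE.match(email) or len(email) >= 40:
        return False
    if not (12 < len(message) < 1280):
        return False
    words = set(re.findall(r"[a-z']+", message.lower()))
    # "most likely isn't gibberish": at least 4 common English words
    return len(words & COMMON_WORDS) >= 4

assert validate("a@b.co", "it is the best of a kind and that works")
assert not validate("not-an-email", "it is the best of a kind and that works")
```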

    Reading stored messages

    Messages can be read by going to /admin and entering the number of messages you would like to read and the password. Messages displayed are ordered from newest to oldest, so if you request 1 message, you will get the latest message someone sent.

    UI/UX

    • Material UI and Material UI inspired components are used throughout the site.
    • The site is responsive and is compatible with mobile, small desktop, and large desktop resolutions.
    • Microsoft Clarity is embedded for user experience analysis.
    • My site uses “Smart Links”. Read more about them here.

    Running

    For Getting Most Features Working Locally

    npm install
    npm run dev

    Open http://localhost:3000 with your browser to see the result.

    Connecting Microsoft Clarity

    Local development

    Add a .env.development.local file if it doesn’t already exist. Then create an account on Microsoft Clarity and copy your Clarity ID. Add this line to the env file:

    MS_CLARITY_ID="<your Clarity ID>"
    

    Vercel Deployments

    Add your Clarity ID to the list of environment variables with the variable name being MS_CLARITY_ID

    Vercel Redis Integration for Local Development and Deployments

    Follow this guide: https://vercel.com/guides/using-vercel-kv

    Connecting hCaptcha

    Create an account on hCaptcha and register the domain the site will run on. Fill out the following environment variables on Vercel or in your local env:

    HCAPTCHA_SECRET
    HCAPTCHA_SITE_KEY
    NEXT_PUBLIC_HCAPTCHA_SITE_KEY
    

    Both HCAPTCHA_SITE_KEY and NEXT_PUBLIC_HCAPTCHA_SITE_KEY must have the same value.

    Reading messages on the admin page

    Add this environment variable on your Vercel project and choose your desired password:

    ADMIN_PASSWORD
    
    Visit original content creator repository https://github.com/M-Valentino/M-Valentino_NextJS
  • poke-game

    Adivina el Pokémon

    Welcome to Adivina el Pokémon (“Guess the Pokémon”), a fun and challenging game where you can put your knowledge of Pokémon to the test. You select a level, the application shows you the silhouette of a Pokémon, and you have to choose which of the options it is.

    This project is for educational purposes and was used to learn and practice ReactJS, Context API, PokeAPI, Axios, Tailwind CSS, Figma, etc.

    Features

    • Selectable difficulty levels
    • An attractive, easy-to-use interface
    • A scoreboard to keep track of your progress
    • An up-to-date Pokémon database

    How to play

    1. Select a difficulty level
    2. Look at the Pokémon’s silhouette and choose the correct option from those available
    3. Keep playing until you have guessed every Pokémon or lost all your lives

    Running the game

    To run Adivina el Pokémon in your local environment, use the following commands:

    Start the development server

    npm run dev

    Technologies used

    Adivina el Pokémon is built with the following technologies:

    • Vite
    • ReactJS
    • Custom Hooks
    • Context API
    • PokeAPI
    • Axios
    • Tailwind CSS
    • Figma

    License

    Adivina el Pokémon is available under the MIT license.

    Visit original content creator repository
    https://github.com/lChrisjm/poke-game

  • dummy-bakalari

    Dummy Bakaláři 🤖

    Bakaláři server for faking authentication in the mobile app Bakaláři Online📱 (Apple, Google, Aptoide).

    About 🤔

    Most companies offering student discounts verify your eligibility by checking if you are authenticated in the Bakaláři Online mobile app. This project is a proof-of-concept implementation of all Bakaláři API endpoints required for the authentication.

    Usage 💻

    After installing the official app, you can choose between creating a new profile and using a one-time sign-in option. I recommend using the one-time sign-in option just for testing.

    When filling in the information, use the URL of the dummy server as the server name. For example, in the picture below I used my demo server (https://dub.fosny.eu). I also entered “DISPLAY NAME” as the username and “CLASS NAME” as the password.

    Screenshot of the mobile log-in screen with filled information

    After clicking the “Sign in” button, it will redirect you to the user page with your username as the student’s name and password as the student’s class.

    NOTE: Clicking on any module except “Dokumenty” will open a blank page saying an error occurred as the dummy server responds with a 400 status code to any unknown request.

    Screenshot of the mobile log-in screen with filled information

    To change your student name or class name, if you

    • created a profile
      1. Open left sidebar
      2. Click on your student name to get to the “select profile” page
      3. Press the edit button next to your profile
      4. Change your profile name to your needs
    • used one-time sign-in: Sign out and log back in with different credentials

    Docker/Podman compose 🚚

    An example config file to expose a service at port 8080:

    NOTE: Bakaláři Online requires HTTPS protocol and will fail with plain HTTP or a self-signed certificate. You have to put this service behind a proxy with a valid certificate. (I recommend using Let’s Encrypt🔒)

    version: "3"
    
    services:
      dummy-bakalari:
        build: https://gitlab.com/vfosnar/dummy-bakalari.git#main
        restart: always
        environment:
          - "APP_ADDRESS=:80"
        ports:
          - "8080:80"
    Visit original content creator repository https://github.com/vfosnar/dummy-bakalari
  • http

    Parable Http


    Parable Http is a minimalist Http library used to receive requests and send responses. It is not a full implementation, offering just-enough functionality.

    Install

    PHP 8.0+ and Composer are required.

    $ composer require parable-php/http

    Usage

    To create a Request object automatically from the server variables, use:

    $request = RequestFactory::createFromServer();

    To create a Request from scratch, use:

    $request = new Request(
        'GET', 
        'http://url.here/path?param=value'
    );

    To set up a minimal response you want to send to the client:

    $response = new Response(200, 'This is the body');

    And to send it, use the Dispatcher:

    $response = new Response(200, 'This is the body');
    $dispatcher = new Dispatcher();
    
    $dispatcher->dispatch($response);

    This will send a response with status code 200 and the body that was passed to the Response upon creation.
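Conceptually, dispatching a Response amounts to writing a status line, the headers, and the body to the client. As a rough illustration (a Python sketch, not Parable's actual implementation; in PHP this work is done with header() and echo):

```python
# Illustrative sketch of what an HTTP dispatcher conceptually emits
# for a Response with a status code, content type and body.
STATUS_TEXTS = {200: "OK", 404: "Not Found"}

def render_response(status_code, body, content_type="text/html",
                    protocol="HTTP/1.1"):
    """Build the raw HTTP message a dispatcher would send to the client."""
    status_line = f"{protocol} {status_code} {STATUS_TEXTS[status_code]}"
    headers = [f"Content-Type: {content_type}",
               f"Content-Length: {len(body.encode())}"]
    # Status line, headers, a blank line, then the body.
    return "\r\n".join([status_line, *headers, "", body])

print(render_response(200, "This is the body"))
```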

    API

    Request

    • getMethod(): string – returns GET, POST, etc.
    • getUri(): Uri – return a Uri object representing the uri being requested
    • getRequestUri(): ?string – the path of the Uri
    • getProtocol(): string – the protocol used (i.e. HTTP/1.1)
    • getProtocolVersion(): string – the version part of the protocol (i.e. 1.1)
    • getBody(): ?string – the body of the request, if any
    • getUser(): ?string – the username from the uri
    • getPass(): ?string – the password from the uri
    • isHttps(): bool – whether the request was made over https. This represents a ‘best guess’ based on multiple checks
    • isMethod(string $method): bool – check whether the method matches $method

    From the HasHeaders trait:

    • getHeader(string $header): ?string – get a single header by string, null if non-existing
    • getHeaders(): string[] – get all headers

    Response

    • getBody(): ?string – the body to be sent
    • setBody(string $body): void – set the body as a string
    • prependBody(string $content): void – prepend the value to the body
    • appendBody(string $content): void – append the value to the body
    • getContentType(): string – the content type (i.e. text/html, application/json)
    • setContentType(string $contentType): void – set the content type
    • getProtocol(): string – the protocol to be sent with (i.e. HTTP/1.1)
    • getProtocolVersion(): string – the protocol version (i.e. 1.1)
    • setProtocol(string $protocol): void – set the protocol
    • setHeaders(array $headers): void – set multiple headers, resetting
    • addHeaders(array $headers): void – add multiple headers
    • addHeader(string $header, string $value): void – add single header

    From the HasHeaders trait:

    • getHeader(string $header): ?string – get a single header by string, null if non-existing
    • getHeaders(): string[] – get all headers

    From the HasStatusCode trait:

    • getStatusCode(): int – the status code to be sent (i.e. 200)
    • getStatusCodeText(): ?string – the status code text to be sent (i.e. OK)
    • setStatusCode(int $statusCode): void – set the status code

    Dispatcher

    • dispatch(Response $response): void – dispatch a Response, sending all its content as set
    • dispatchAndTerminate(Response $response, int $exitCode = 0): void – dispatch a Response and terminate, i.e., ending program flow immediately afterwards

    Contributing

    Any suggestions, bug reports or general feedback are welcome. Use GitHub issues and pull requests, or find me over at devvoh.com.

    License

    All Parable components are open-source software, licensed under the MIT license.

    Visit original content creator repository https://github.com/parable-php/http
  • hra

    Hybrid Reward Architecture

    This repository hosts the code published along with the following NIPS article (Experiment 4.1: Fruit Collection Task):

    For more information about this article, see the following blog posts:

    Dependencies

    We strongly suggest using the Anaconda distribution.

    • Python 3.5 or higher
    • pygame 1.9.2+ (pip install pygame)
    • click (pip install click)
    • numpy (pip install numpy — or install Anaconda distribution)
    • Keras 1.2.0+, but less than 2.0 (pip install keras==1.2)
    • Theano or Tensorflow. The code is fully tested on Theano. (pip install theano)

    Usage

    While a run is in progress, the results as well as the AI models are saved in the ./results subfolder. For a complete run (five experiments for each method), use the following command (it may take several hours depending on your machine):

    ./run.sh
    
    • NOTE: Because the state-shape is relatively small, the deep RL methods of this code run faster on CPU.

    Alternatively, for a single run use the following commands:

    • Tabular GVF:
    ipython ./tabular/train.py -- -o use_gvf True -o folder_name tabular_gvf_ -o nb_experiments 1
    
    • Tabular no-GVF:
    ipython ./tabular/train.py -- -o use_gvf False -o folder_name tabular_no-gvf_ -o nb_experiments 1
    
    • DQN:
    THEANO_FLAG="device=cpu" ipython ./dqn/train.py -- --mode hra+1 -o nb_experiments 1
    
    • --mode can be either of dqn, dqn+1, hra, hra+1, or all.

    Demo

    We have also provided the code to demo Tabular GVF/NO-GVF methods. You first need to train the model using one of the above commands (Tabular GVF or no-GVF) and then run the demo. For example,

    ipython ./tabular/train.py -- -o use_gvf True -o folder_name tabular_gvf_ -o nb_experiments 1
    ipython ./tabular/train.py -- --demo -o folder_name tabular_gvf_
    

    If you would like to save the results, use the --save option:

    ipython ./tabular/train.py -- --demo --save -o folder_name tabular_gvf_
    

    The rendered images will be saved in ./render directory by default.

    License

    Please refer to LICENSE.txt.

    Visit original content creator repository
    https://github.com/Maluuba/hra

  • musli

    musli


    Excellent performance, no compromises!

    Müsli is a flexible, fast, and generic binary serialization framework for Rust, in the same vein as serde.

    It provides a set of formats, each with its own well-documented set of features and tradeoffs. Every byte-oriented serialization method, including escaped formats like musli::json, has full #[no_std] support with or without alloc. It also includes a particularly neat component that provides low-level, refreshingly simple zero-copy serialization.


    Overview


    Usage

    Add the following to your Cargo.toml using the format you want to use:

    [dependencies]
    musli = { version = "0.0.145", features = ["storage"] }

    Design

    The heavy lifting is done by the Encode and Decode derives which are documented in the derives module.

    Müsli operates based on the schema represented by the types which implement these traits.

    use musli::{Encode, Decode};
    
    #[derive(Encode, Decode)]
    struct Person {
        /* .. fields .. */
    }

    Note that by default a field is identified by its numerical index, which would change if fields are re-ordered. Renaming fields and setting a default naming policy can be done by configuring the derives.

    The binary serialization formats provided aim to efficiently and accurately encode every type and data structure available in Rust. Each format comes with well-documented tradeoffs and aims to be fully memory safe to use.

    Internally we use the terms “encoding”, “encode”, and “decode” because they are distinct from serde‘s “serialization”, “serialize”, and “deserialize”, allowing for clearer interoperability between the two libraries. Encoding and decoding also have more of a “binary serialization” vibe, which more closely reflects the focus of this framework.

    Müsli is designed on similar principles as serde, relying on Rust’s powerful trait system to generate code which can largely be optimized away. The end result should be very similar to handwritten, highly optimized code.

    As an example of this, these two functions both produce the same assembly (built with --release):

    const OPTIONS: Options = options::new().fixed().native_byte_order().build();
    const ENCODING: Encoding<OPTIONS> = Encoding::new().with_options();
    
    #[derive(Encode, Decode)]
    #[musli(packed)]
    pub struct Storage {
        left: u32,
        right: u32,
    }
    
    fn with_musli(storage: &Storage) -> Result<[u8; 8]> {
        let mut array = [0; 8];
        ENCODING.encode(&mut array[..], storage)?;
        Ok(array)
    }
    
    fn without_musli(storage: &Storage) -> Result<[u8; 8]> {
        let mut array = [0; 8];
        array[..4].copy_from_slice(&storage.left.to_ne_bytes());
        array[4..].copy_from_slice(&storage.right.to_ne_bytes());
        Ok(array)
    }

    Müsli is different from serde

    Müsli’s data model does not speak Rust. There are no serialize_struct_variant methods that provide metadata about the type being serialized. The Encoder and Decoder traits are agnostic to this. Compatibility with Rust types is entirely handled using the Encode and Decode derives in combination with modes.

    We use GATs to provide easier to use abstractions. GATs were not available when serde was designed.

    Everything is a Decoder or Encoder. Field names are therefore not limited to be strings or indexes, but can be named to arbitrary types if needed.

    Visitors are only used when needed. serde completely uses visitors when deserializing and the corresponding method is treated as a “hint” to the underlying format. The deserializer is then free to call any method on the visitor depending on what the underlying format actually contains. In Müsli, we swap this around. If the caller wants to decode an arbitrary type it calls decode_any. The format can then either signal the appropriate underlying type or call Visitor::visit_unknown telling the implementer that it does not have access to type information.

    We’ve invented moded encoding allowing the same Rust types to be encoded in many different ways with much greater control over how things are encoded. By default we include the Binary and Text modes providing sensible defaults for binary and text-based formats.

    Müsli fully supports no-std and no-alloc from the ground up without compromising on features using safe and efficient scoped allocations.

    We support detailed tracing when decoding for much improved diagnostics of where something went wrong.


    Formats

    Formats are currently distinguished by supporting various degrees of upgrade stability. A fully upgrade stable encoding format must tolerate that one model can add fields that an older version of the model should be capable of ignoring.

    Partial upgrade stability can still be useful as is the case of the musli::storage format below, because reading from storage only requires decoding to be upgrade stable. So if correctly managed with #[musli(default)] this will never result in any readers seeing unknown fields.

    The available formats and their capabilities are:

                                             reorder  missing  unknown  self
    musli::packed (with #[musli(packed)])       ✗        ✗        ✗       ✗
    musli::storage                              ✔        ✔        ✗       ✗
    musli::wire                                 ✔        ✔        ✔       ✗
    musli::descriptive                          ✔        ✔        ✔       ✔
    musli::json                                 ✔        ✔        ✔       ✔

    reorder determines whether fields must occur in exactly the order in which they are specified in their type. Reordering fields in such a type would cause unknown but safe behavior of some kind. This is only suitable for communication where the data models of each client are strictly synchronized.

    missing determines if reading can handle missing fields through something like Option<T>. This is suitable for on-disk storage, because it means that new optional fields can be added as the schema evolves.

    unknown determines if the format can skip over unknown fields. This is suitable for network communication. At this point you’ve reached upgrade stability. Some level of introspection is possible here, because the serialized format must contain enough information about fields to know what to skip which usually allows for reasoning about basic types.

    self determines if the format is self-descriptive, allowing the structure of the data to be fully reconstructed from its serialized state. These formats do not require models to decode, and can be converted to and from dynamic containers such as musli::value for introspection. Such formats also allow type coercions to be performed, so that a signed number can be correctly read as an unsigned number if it fits in the destination type.

    For every feature you drop, the format becomes more compact and efficient. musli::storage using #[musli(packed)] for example is roughly as compact as bincode while musli::wire is comparable in size to something like protobuf. All formats are primarily byte-oriented, but some might perform bit packing if the benefits are obvious.
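    To make the size tradeoff concrete, here is a small std-only Rust sketch (illustrative only, not Müsli internals) comparing a fixed-width encoding with a LEB128-style varint for the same value:

    ```rust
    // Illustrative sketch: the size cost of a fixed-width encoding versus a
    // LEB128-style varint for the same integer value.

    /// Encode `v` as 8 fixed little-endian bytes: trivial to copy, never smaller.
    fn encode_fixed(v: u64) -> Vec<u8> {
        v.to_le_bytes().to_vec()
    }

    /// Encode `v` as a LEB128-style varint: 7 bits per byte, high bit = "more".
    fn encode_varint(mut v: u64) -> Vec<u8> {
        let mut out = Vec::new();
        loop {
            let byte = (v & 0x7f) as u8;
            v >>= 7;
            if v == 0 {
                out.push(byte);
                break;
            }
            out.push(byte | 0x80);
        }
        out
    }

    fn main() {
        // Small values are much more compact as varints...
        assert_eq!(encode_fixed(300).len(), 8);
        assert_eq!(encode_varint(300).len(), 2);
        // ...but large values can cost more than the fixed encoding.
        assert_eq!(encode_varint(u64::MAX).len(), 10);
    }
    ```

    Compact formats trade a little encoding work per field for space; fixed-width formats trade space for raw copy speed.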


    Upgrade stability

    The following is an example of full upgrade stability using musli::wire. Version1 can be decoded from an instance of Version2 because it understands how to skip fields which are part of Version2. We’re also explicitly adding #[musli(name = ..)] to the fields to ensure that they don’t change in case they are re-ordered.

    use musli::{Encode, Decode};
    
    #[derive(Debug, PartialEq, Encode, Decode)]
    struct Version1 {
        #[musli(Binary, name = 0)]
        name: String,
    }
    
    #[derive(Debug, PartialEq, Encode, Decode)]
    struct Version2 {
        #[musli(Binary, name = 0)]
        name: String,
        #[musli(Binary, name = 1)]
        #[musli(default)]
        age: Option<u32>,
    }
    
    let version2 = musli::wire::to_vec(&Version2 {
        name: String::from("Aristotle"),
        age: Some(61),
    })?;
    
    let version1: Version1 = musli::wire::decode(version2.as_slice())?;

    The following is an example of partial upgrade stability using musli::storage on the same data models. Note how Version2 can be decoded from Version1 but not the other way around making it suitable for on-disk storage where the schema can evolve from older to newer versions.

    let version2 = musli::storage::to_vec(&Version2 {
        name: String::from("Aristotle"),
        age: Some(61),
    })?;
    
    assert!(musli::storage::decode::<_, Version1>(version2.as_slice()).is_err());
    
    let version1 = musli::storage::to_vec(&Version1 {
        name: String::from("Aristotle"),
    })?;
    
    let version2: Version2 = musli::storage::decode(version1.as_slice())?;

    Modes

    In Müsli, in contrast to serde, the same model can be serialized in different ways. Instead of requiring the use of distinct models, we support implementing different modes for a single model.

    A mode is a type parameter, which allows for different attributes to apply depending on which mode an encoder is configured to use. A mode can apply to any musli attributes giving you a lot of flexibility.

    If a mode is not specified, an implementation applies to all modes (M). If at least one mode is specified, the implementation applies to every mode present in the model as well as to Binary and Text. This way, an encoding which uses the default Binary or Text modes should always work.

    For more information on how to configure modes, see derives.

    Below is a simple example of how we can use two modes to provide two completely different formats using a single struct:

    use musli::{Decode, Encode};
    use musli::json::Encoding;
    
    enum Alt {}
    
    #[derive(Decode, Encode)]
    #[musli(Text, name_all = "name")]
    #[musli(mode = Alt, packed)]
    struct Word<'a> {
        text: &'a str,
        teineigo: bool,
    }
    
    const TEXT: Encoding = Encoding::new();
    const ALT: Encoding<Alt> = Encoding::new().with_mode();
    
    let word = Word {
        text: "あります",
        teineigo: true,
    };
    
    let out = TEXT.to_string(&word)?;
    assert_eq!(out, r#"{"text":"あります","teineigo":true}"#);
    
    let out = ALT.to_string(&word)?;
    assert_eq!(out, r#"["あります",true]"#);

    Going very fast

    With the previous sections it should be apparent that speed is primarily a game of tradeoffs. If we make every tradeoff in favor of speed, Müsli is designed to be the fastest framework out there.

    The tradeoffs we will be showcasing to achieve speed here are:

    • Pre-allocate serialization space. This avoids all allocations during serialization. The tradeoff is that if the data we are serializing contains dynamically sized information which goes beyond the pre-allocated space, we will error.
    • Use fixed-sized integers and floats. We use more space, but the cost of serializing numerical fields essentially boils down to copying them.
    • Use a native byte order. With this we avoid any byte-swapping operations. But our data becomes less portable.
    • Use a packed format. This doesn’t allow for any upgrades, but we avoid paying the overhead of serializing field identifiers.
    • Use the Slice allocator. This avoids all heap allocations using the global allocator. While the global allocator is quite efficient and normally shouldn’t be avoided, the slice allocator is a fixed-slab allocator. The tradeoff here is that we will error in case we run out of memory, but we only need to use the allocator if the types being serialized (or the format) demands it.
    • Disable error handling. Code generation will be able to remove everything related to error handling, like allocations. To do this we can make use of the default context without configuring it for tracing. If an error happens, we are only informed of that fact through a zero-sized marker type.

    We achieve this through the following methods:

    use musli::alloc::{Allocator, Global};
    use musli::context::{self, ErrorMarker as Error};
    use musli::options::{self, Float, Integer, Width, Options};
    use musli::storage::Encoding;
    use musli::{Decode, Encode};
    use musli::alloc::Slice;
    
    enum Packed {}
    
    const OPTIONS: Options = options::new().fixed().native_byte_order().build();
    const ENCODING: Encoding<OPTIONS, Packed> = Encoding::new().with_options().with_mode();
    
    #[inline]
    pub fn encode<'buf, T, A>(buf: &'buf mut [u8], value: &T, alloc: A) -> Result<&'buf [u8], Error>
    where
        T: Encode<Packed>,
        A: Allocator,
    {
        let cx = context::new_in(alloc);
        let w = ENCODING.to_slice_with(&cx, &mut buf[..], value)?;
        Ok(&buf[..w])
    }
    
    #[inline]
    pub fn decode<'buf, T, A>(buf: &'buf [u8], alloc: A) -> Result<T, Error>
    where
        T: Decode<'buf, Packed, A>,
        A: Allocator,
    {
        let cx = context::new_in(alloc);
        ENCODING.from_slice_with(&cx, buf)
    }

    We also need some cooperation from the types being serialized since they need to use the Packed mode we defined just above:

    use musli::{Encode, Decode};
    
    #[derive(Encode, Decode)]
    #[musli(mode = Packed, packed)]
    struct Person {
        name: String,
        age: u32,
    }

    Using the framework above also needs a bit of prep, namely the slice allocator needs to be initialized:

    use musli::alloc::{ArrayBuffer, Slice};
    
    let mut buf = ArrayBuffer::new();
    let alloc = Slice::new(&mut buf);
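    As a rough illustration of what a fixed-slab allocator trades away, here is a std-only bump-allocator sketch (hypothetical, not the actual Slice implementation): it hands out chunks from a pre-allocated region by bumping an offset, and errors instead of growing:

    ```rust
    // Illustrative sketch of a fixed-region ("slab"-style) allocator: no heap,
    // no growth, allocation fails once the pre-allocated buffer is exhausted.

    struct Bump<'a> {
        buf: &'a mut [u8],
        used: usize,
    }

    impl<'a> Bump<'a> {
        fn new(buf: &'a mut [u8]) -> Self {
            Self { buf, used: 0 }
        }

        /// Reserve `n` bytes, or error if the fixed region is exhausted.
        fn alloc(&mut self, n: usize) -> Result<&mut [u8], &'static str> {
            let start = self.used;
            let end = start
                .checked_add(n)
                .filter(|&e| e <= self.buf.len())
                .ok_or("out of fixed memory")?;
            self.used = end;
            Ok(&mut self.buf[start..end])
        }
    }

    fn main() {
        let mut storage = [0u8; 16];
        let mut bump = Bump::new(&mut storage);
        assert!(bump.alloc(10).is_ok());
        assert!(bump.alloc(6).is_ok()); // exactly fills the region
        assert!(bump.alloc(1).is_err()); // the tradeoff: an error, not growth
    }
    ```

    The design choice is the same one described above: bounded, predictable memory use in exchange for a possible out-of-memory error.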

    That’s it! You are now using Müsli in the fastest possible mode. Feel free to use it to “beat” any benchmarks. In fact, the musli_packed mode in our internal benchmarks beat pretty much every framework with these methods.

    My hope is that this should illustrate why you shouldn’t blindly trust benchmarks. Sometimes code is not fully optimized, but most of the time there is a tradeoff. If a benchmark doesn’t tell you what tradeoffs are being made, don’t just naively trust a number.


    Unsafety

    This is a non-exhaustive list of unsafe use in this crate, and why they are used:

    • A mem::transmute in Tag::kind, which guarantees that converting into the #[repr(u8)] Kind enum is as efficient as possible.

    • A largely unsafe SliceReader which provides more efficient reading than the default Reader impl for &[u8] does, since it can perform most of the necessary comparisons directly on the pointers.

    • Some unsafety related to UTF-8 handling in musli::json, because we check UTF-8 validity internally ourselves (like serde_json).

    • FixedBytes<N>, which is a stack-based container that can operate over uninitialized data. Its implementation is largely unsafe. With it stack-based serialization can be performed which is useful in no-std environments.

    • Some unsafe is used for owned String decoding in all binary formats to support faster string processing through simdutf8. Disabling the simdutf8 feature (enabled by default) removes the use of this unsafe.
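    The first item above, the checked transmute into a #[repr(u8)] enum, can be sketched as follows. The Kind variants here are hypothetical, not Müsli’s actual tags, but the shape is the same: validate the byte, then transmute:

    ```rust
    // Illustrative sketch of converting a raw tag byte into a #[repr(u8)] enum
    // via `mem::transmute`, guarded by a range check. The variants are made up.
    use std::mem;

    #[derive(Debug, PartialEq, Clone, Copy)]
    #[repr(u8)]
    enum Kind {
        Number = 0,
        Sequence = 1,
        Map = 2,
    }

    const KIND_MAX: u8 = Kind::Map as u8;

    /// Safe wrapper: only transmute after checking the value is a valid variant.
    fn kind_from_tag(tag: u8) -> Option<Kind> {
        if tag <= KIND_MAX {
            // SAFETY: `Kind` is `#[repr(u8)]` with contiguous discriminants
            // 0..=KIND_MAX, and `tag` has just been checked to be in range.
            Some(unsafe { mem::transmute::<u8, Kind>(tag) })
        } else {
            None
        }
    }

    fn main() {
        assert_eq!(kind_from_tag(1), Some(Kind::Sequence));
        assert_eq!(kind_from_tag(9), None);
    }
    ```

    Compared to a match over every discriminant, the transmute compiles to a bounds check and a plain byte copy.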

    To ensure this library is correctly implemented with regards to memory safety, extensive testing and fuzzing are performed using miri. See tests for more information.


    Footnotes

    1. As in Müsli should be able to do everything you need and more.

    2. This is strictly not a binary serialization, but it was implemented as a litmus test to ensure that Müsli has the necessary framework features to support it. Luckily, the implementation is also quite good!

    Visit original content creator repository https://github.com/udoprog/musli
  • Solana-arbitrage-bot

    solana arbitrage bot

    happy searching 🙂

    layout

    • client/: off-chain arbitrage bot code
    • program/: on-chain swap program
    • pools/: dex pool metadata
    • onchain-data/: analysis of other arbitrage swaps
    • mainnet-fork/: fork mainnet account states to test swap input/output estimates

    each folder contains a corresponding README.md which explains how it works

    dexs supported

    • serum
    • aldrin
    • saber
    • mercurial
    • orca

    other notes

    • we use rust unit tests (with mainnet forking) to ensure our swap quotes are correct (quoted swap amount = actual swap amount)
    • to figure out how to interface with each dex (most dont have rust sdks or even public ts sdks) we reverse engineer how to interact with them through the jupiter-swap sdk (analyzing the npm package folder bc its also not public)
    • in the client we use a brute-force approach to find arb opportunities instead of negative cycle algos bc its much faster and can find all opportunities
    • we also dont calculate the optimal swap input amount for each arb bc its faster to spam multiple decreasing amounts and let the largest one land (this is what the winner of the arbitrage opportunities was doing – eg, send tx with input size N, N/2, N/4, …)
    • why do we need an on-chain swap program? check this post out: if you swap from A -> B -> C you might get more/less of B than expected, which affects the swap instruction to C
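    the decreasing-amount spam from the notes above can be sketched like this (hypothetical helper, not the actual bot code):

    ```rust
    // sketch of the "spam decreasing input sizes" trick: for an input of N,
    // generate N, N/2, N/4, ... down to some minimum viable size, submit a tx
    // for each, and let the largest one land.
    fn halving_inputs(n: u64, floor: u64) -> Vec<u64> {
        let mut out = Vec::new();
        let mut amount = n;
        while amount >= floor {
            out.push(amount);
            amount /= 2;
        }
        out
    }

    fn main() {
        // e.g. start at 1000 with a minimum viable size of 100
        assert_eq!(halving_inputs(1000, 100), vec![1000, 500, 250, 125]);
    }
    ```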

    why share this alpha

    the life of a lone searcher is a lonely one where you cant share what you find or share your code – while working on this project i realized this is not what im about and thus i open source

    Visit original content creator repository
    https://github.com/texora/solana-arbitrage-bot