Blog

  • pdf.luhui.net

    Visit original content creator repository
    https://github.com/kugeceo/pdf.luhui.net

  • Transcending-Trident

    Transcending Trident

    Download

    You can download Transcending Trident on CurseForge and Modrinth:

      CurseForge:   https://curseforge.com/minecraft/mc-mods/transcending-trident
      Modrinth:      https://modrinth.com/mod/transcending-trident

    Issue Tracker

    To keep a better overview of all mods, the issue tracker is located in a separate repository.
      For issues, ideas, suggestions or anything else, please follow this link:

        -> Issue Tracker

    Pull Requests

    Because of the way mod loader files are bundled into one jar, some extra information is needed to do a PR.
      A wiki page entry about it is available here:

        -> Pull Request Information

    Mod Description

    Requires the library mod Collective.
       This mod is part of The Vanilla Experience modpack and Serilum’s Extra Bundle mod.
    Transcending Trident improves the vanilla trident’s functionality. You can use the Riptide enchantment without rain by holding a water bucket in your other hand. By default the mod also makes the trident more powerful, but you can toggle that inside the config file.
Configurable: ( how do I configure? )
    mustHoldBucketOfWater (default = true): When enabled, Riptide can only be used without rain when the user is holding a bucket of water.
    tridentUseDuration (default = 5, min 0, max 20): The amount of time a player needs to charge the trident before being able to use Riptide. Minecraft’s default is 10.
    tridentUsePowerModifier (default = 3.0, min 0, max 100.0): The riptide power of the trident is multiplied by this number on use. Allows traveling a greater distance with a single charge.
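
    The config file itself is not shown in this description, but based on the options above it presumably looks something like this (the file name and TOML format are assumptions; check your instance’s config folder):

      # config/transcendingtrident-common.toml (hypothetical path)
      mustHoldBucketOfWater = true    # require an offhand water bucket for rainless Riptide
      tridentUseDuration = 5          # charge time before Riptide can trigger (vanilla default: 10)
      tridentUsePowerModifier = 3.0   # multiplier applied to the Riptide launch power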
    The effect in action (It’s amazingly fun!):

    (animated GIF of the Riptide effect)


    ——————
    You may freely use this mod in any modpack, as long as the download remains hosted within the CurseForge or Modrinth ecosystem.
    Serilum.com contains an overview and more information on all mods available.
    Comments are disabled as I’m unable to keep track of all the separate pages on each mod.
    For issues, ideas, suggestions or anything else there is the Github repo. Thanks!

    Visit original content creator repository https://github.com/Serilum/Transcending-Trident
  • tidb-multi-kube-cluster

    Usage

    This repository is designed to help you create a database system spanning two Kubernetes clusters and merge the data between the two clusters in real time.

    Diagram

    (architecture diagram)

    Before you begin

    You need two Kubernetes clusters available on any cloud platform.

    The two clusters should be on the same VPC! Or set up VPC peering in case the clusters are on different VPCs.

    Required to be installed on your machine

    Kubectx. Check out the installation guide here: Link

    Usage of Kubectx

    With kubectx you don’t have to write --context=${context} every time:

    kubectx ${context1}
    Switched to context "${context1}".
    



    Install TiDB operator CRDs

    kubectx ${context1}
    kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
    



    Install TiDB operator on Cluster 1

    kubectx ${context1}
    helm repo add pingcap https://charts.pingcap.org/
    kubectl create namespace tidb-admin 
    helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.0-beta.1
    



    Install TiDB operator on Cluster 2

    Swapping the context with kubectx is required in this step

    kubectx ${context2}
    helm repo add pingcap https://charts.pingcap.org/
    kubectl create namespace tidb-admin 
    helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.0-beta.1
    



    Confirm that TiDB operator is running on both clusters

    kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
    
    output
    NAME                                       READY   STATUS    RESTARTS   AGE
    tidb-controller-manager-6d8d5c6d64-b8lv4   1/1     Running   0          2m22s
    tidb-scheduler-644d59b46f-4f6sb            2/2     Running   0          2m22s
    



    Get the DNS service in our cluster

    kubectl get svc -n kube-system --context=${context1}
    
    output
    NAME                          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT
    kube-dns                      ClusterIP   10.0.0.10     <none>        53:32586/UDP,53:31971/TCP
    metrics-server                ClusterIP   10.0.45.125   <none>        443/TCP
    npm-metrics-cluster-service   ClusterIP   10.0.62.42    <none>        9000/TCP
    

    The External IP of kube-dns is not available when the kube-dns service type is ClusterIP; if we change the type of the kube-dns service to LoadBalancer, the external IP will become available.

    kubectl edit svc kube-dns -n kube-system --context=${context1}
    
    Place to edit

    (screenshot of the field to edit)
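
    If you prefer a non-interactive alternative to kubectl edit, the same change can presumably be made with a patch (a sketch, not from the original guide):

    kubectl patch svc kube-dns -n kube-system --context=${context1} -p '{"spec": {"type": "LoadBalancer"}}'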

    kubectl get svc -n kube-system --context=${context1}
    # Do this with the other cluster and copy both External IPs
    # In this example for ${cluster1} ip is 20.247.240.14
    # In this example for ${cluster2} ip is 20.205.255.74
    

    (screenshot of both External IPs)



    Forward the DNS IPs so that both clusters can communicate with each other

    Edit coredns/corednscluster1.yaml with your External IP from cluster2. Edit coredns/corednscluster2.yaml with your External IP from cluster1.

    Example coredns/corednscluster1.yaml

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: coredns-custom
          namespace: kube-system
        data:
          puglife.server: |
            hello-2.svc.cluster.local:53 {
              errors
              cache 30
              forward . 20.205.255.74 {
                force_tcp
              }
            }

        # puglife.server : any name ending in .server is fine
        # hello-2 : the namespace that the tidbcluster will be allocated in on $cluster2
        # 20.205.255.74 : External DNS IP from cluster 2
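
    The guide does not show it explicitly, but the edited ConfigMaps presumably need to be applied to their clusters before restarting CoreDNS (file paths as named above):

    kubectl apply -f coredns/corednscluster1.yaml --context=${context1}
    kubectl apply -f coredns/corednscluster2.yaml --context=${context2}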



    Restart Core DNS

    kubectl -n kube-system rollout restart deployment coredns --context=${context1}
    kubectl -n kube-system rollout restart deployment coredns --context=${context2}
    



    Apply TiDBCluster

    kubectl apply -f tidbcluster/tidbcluster1.yaml --context=${context1}
    kubectl apply -f tidbcluster/tidbcluster2.yaml --context=${context2}
    



    Check the TiDB cluster status on both clusters

    kubectl get po -n hello-1 --context=${context1}
    kubectl get po -n hello-2 --context=${context2}
    
    Output
    # Cluster: ${context1}
    NAME                                     READY   STATUS     
    tidbcluster1-discovery-5c49fdd79-2njvh   1/1     Running   
    tidbcluster1-pd-0                        1/1     Running   
    tidbcluster1-tidb-0                      2/2     Running   
    tidbcluster1-tikv-0                      1/1     Running   
    
    # Cluster: ${context2}
    NAME                                      READY   STATUS    
    tidbcluster2-discovery-56886846f8-pnzkc   1/1     Running   
    tidbcluster2-pd-0                         1/1     Running   
    tidbcluster2-tidb-0                       2/2     Running   
    tidbcluster2-tikv-0                       1/1     Running   
    



    Verify that you can connect to the database using MySQL Workbench or the mysql client

    We need to forward the port to our machine and create a connection in MySQL Workbench or the mysql client (in this case, we will use MySQL Workbench)

    kubectl --context=${context1} port-forward -n hello-1 svc/tidbcluster1-tidb 15000:4000
    
    Output
        Forwarding from 127.0.0.1:15000 -> 4000
        Forwarding from [::1]:15000 -> 4000
    
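
    If you prefer the mysql client over Workbench, a connection along these lines should work while the port-forward is running (assuming TiDB’s default root user with an empty password):

    mysql --host 127.0.0.1 --port 15000 -u root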



    Open MySQL Workbench

    (screenshot of the MySQL Workbench connection setup)



    Create mock data with a User table

    use test;
    
    CREATE TABLE User (
        UserID INT PRIMARY KEY,
        FirstName VARCHAR(50),
        LastName VARCHAR(50),
        Age INT,
        Email VARCHAR(100)
    );
    
    INSERT INTO User (UserID, FirstName, LastName, Age, Email)
    VALUES (1, 'John', 'Doe', 30, 'john.doe@example.com');
    
    INSERT INTO User (UserID, FirstName, LastName, Age, Email)
    VALUES (2, 'Jane', 'Smith', 25, 'jane.smith@example.com');
    
    SELECT * FROM test.User;
    
    Output > (screenshot of the inserted rows)



    Check that the data from both clusters has been merged

    Close the port-forward to cluster1 and forward the port for the cluster2 database

    kubectl --context=${context2} port-forward -n hello-2 svc/tidbcluster2-tidb 15000:4000
    



    Query some data

    SELECT * FROM test.User;
    # If you get the same output as from the cluster1 database, the data from both clusters has been merged
    



    Let’s back up our data with MinIO

    You can follow these docs if you prefer: Link

    Create a namespace named minio-dev

    kubectl create ns minio-dev --context=${context1}
    
    kubectl apply -f minio/minio-dev.yaml --context=${context1}
    
    Output
    NAME    READY   STATUS    
    minio   1/1     Running 
    



    Port-forward the MinIO pod and create an access key and secret key (save the keys properly, because you can’t view them again)

    kubectl port-forward pod/minio 9000 9090 -n minio-dev
    



    Create a bucket and copy the name of the bucket

    In my case the name is: my-bucket



    Set up backups for the db cluster

    You can follow these docs if you prefer: Link

    kubectl create namespace backup-test
    
    kubectl apply -f backup/backup-rbac.yaml -n backup-test
    
    kubectl create secret generic s3-secret --from-literal=access_key=xxx --from-literal=secret_key=yyy -n backup-test --context=${context1}
    
    kubectl create secret generic backup-tidb-secret --from-literal=password=mypassword -n backup-test --context=${context1}
    
    
    kubectl get secret -n backup-test --context=${context1}
    
    Output
    NAME                 TYPE     DATA   AGE
    backup-tidb-secret   Opaque   1      9h
    s3-secret            Opaque   2      9h
    



    Copy the External IP of the TiDB cluster in context1 (db cluster1)

    kubectl get svc -n hello-1 --context=${context1}
    

    (screenshot of the service list with the External IP)



    Copy the MinIO pod IP

    kubectl get pod -n minio-dev -o wide --context=${context1}
    

    (screenshot of the pod list with its IP)

    Create an admin user on db cluster1

    kubectl --context=${context1} port-forward -n hello-1 svc/tidbcluster1-tidb 15000:4000
    
    # Execute this command
    # Password should be the same as secret
    # kubectl create secret generic backup-tidb-secret --from-literal=password=mypassword -n backup-test
    
    CREATE USER 'admin1' IDENTIFIED BY 'mypassword'; 
    GRANT ALL ON test.* TO 'admin1';
    GRANT ALL ON mysql.* TO 'admin1';
    SHOW GRANTS FOR 'admin1';
    



    Edit backup/full-backup-s3.yaml

    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-full-backup-s3
      namespace: backup-test
    spec:
      backupType: full
      br:
        cluster: tidbcluster1             # same name as in tidbcluster/tidbcluster1.yaml
        clusterNamespace: hello-1         # namespace that the tidbcluster is allocated in
      from:
        host: "20.247.251.119"            # External IP of the tidbcluster
        port: 4000
        user: admin1                      # user that you created in the db
        secretName: backup-tidb-secret
      s3:
        provider: aws                     # can be anything you prefer
        secretName: s3-secret
        endpoint: http://10.1.1.44:9000   # MinIO IP
        bucket: my-bucket                 # bucket created in the MinIO dashboard
        prefix: my-full-backup-folder
    



    Apply the full backup to MinIO

    kubectl apply -f backup/full-backup-s3.yaml
    # If the backup data appears in the bucket, you have successfully backed up with MinIO
    
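
    Since Backup is a custom resource, its status can also be checked directly (a sketch; resource names as defined in the YAML above):

    kubectl get backup -n backup-test --context=${context1}
    kubectl describe backup demo1-full-backup-s3 -n backup-test --context=${context1}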

    (screenshot of the backup files in the MinIO bucket)

    Visit original content creator repository https://github.com/Cloud-NC-Engineering-Thailand/tidb-multi-kube-cluster
  • usocial

    This project has been discontinued

    usocial (2018 – 2022)

    Most open source projects are simply forgotten and die a slow and lonely death.

    usocial was different. Same same, but different.

    It started as a personal RSS feed reader, evolved into a Podcasting 2.0 client, got released on Umbrel OS…

    You could run usocial on your own Umbrel personal server, follow your favourite blogs, subscribe to your favourite podcasts…

    usocial would connect to your own Lightning node and, while listening to a podcast episode, you could send sats directly to the podcaster. The payment would even automatically be split, according to the podcaster’s desire, and go to different recipients.

    It had a terrible UI, but it worked beautifully. It was my way of keeping up to date with podcasts and blogs and tipping creators.

    Then, something happened.

    usocial didn’t die, it just evolved into something else: Servus (2022-).

    I realized that more important than following blogs and podcasts is publishing your own content. Only after there is a solid way for anyone to self-host their web site and publish content will there be a need for a self-hosted way to subscribe to content.

    I used to be a fan of Jekyll, but I realized that it is not for mere mortals to use. I hated WP, which I had used since 2005 or so. WP was more user-friendly than Jekyll and other SSGs, but it just did not click with me.

    I had written a few CMSes before (2008-2012), mostly trying to host my photoblog in a pre-Flickr era and to build a sort-of online travel log. See nuages, tzadik, feather and travelist.

    Then it all clicked. The missing piece was a CMS. I could take a lot of ideas from Jekyll, while trying to keep the usability of WP.

    That is how Servus was born and that was the end of usocial.

    It didn’t die, it just evolved.

    Setting up the development environment

    1. Clone the repo

      git clone https://github.com/ibz/usocial.git && cd usocial

    2. Set up a venv

      python3 -m venv venv
      source venv/bin/activate
      pip install --upgrade pip
      pip install -e .
      
    3. Create an “instance” directory which will store your database and config file.

      mkdir instance

    4. Generate a secret key (this is required by Flask for CSRF protection)

      echo "SECRET_KEY = '"`python3 -c 'import os;print(os.urandom(12).hex())'`"'" > instance/config.py

    5. Export the environment variables (FLASK_APP is required, FLASK_ENV makes Flask automatically restart when you edit a file)

      export FLASK_APP=usocial.main FLASK_ENV=development

    6. Create the database (this will also create the default user, “me”, without a password)

      flask create-db

    7. Run the app locally

      flask run

    Visit original content creator repository
    https://github.com/ibz/usocial

  • balena-nfs

    Balena NFS Server and Client Project

    Diagram

    (diagram: Grafana 9 on Balena)

    Introduction

    The Balena NFS project demonstrates how to deploy the NFS Server and Client in balenaCloud.

    Read more in the Balena blog post, “Using NFS Server to share external storage between containers”.

    Using Network File System (NFS) in Balena | Share external storage between containers

    Requirements

    • balenaOS 2.105.19 is required for Nvidia Jetson AGX Orin Devkit with NFS version 4.
    • balenaOS 2.99.27+rev1 is required for NFS version 4.
    • balenaOS 2.98 is required for NFS version 3.

    balenaCloud

    The Balena NFS project can be deployed directly to balenaCloud:

    Deploy with balena

    Features

    • Includes an NFS Server built on top of the PostgreSQL Alpine image, using OpenRC to manage NFS services.
    • Supports various environment variables to specify the storage label, mount point, etc.
    • Includes an NFS Client built on top of the NGINX Alpine image, using a custom entrypoint script to mount the NFS export.
    • Provides a Grafana dashboard to manage running services and display configuration using the Supervisor API. The default Grafana username and password are admin/admin.
    • Supports NFS version 4 and version 3.
    • Allows setting NFS to sync or async mode.

    Tested

    • Nvidia Jetson AGX Orin Devkit (jetson-agx-orin-devkit)
    • Raspberry Pi4-64 (raspberrypi4-64)
    • Jetson Xavier (jetson-xavier)
    • x86-64 (genericx86-64-ext)

    Environment Variables

    Environment Variable   Value                       Description
    STORAGE_LABEL          storage                     External Storage ID; if not found, tmpfs will be used instead.
    STORAGE_MOUNT_POINT    /mnt/nvme                   Local mount point for the Storage or tmpfs.
    POSTGRES_PASSWORD      postgres                    Password for the PostgreSQL database.
    PGDATA                 /mnt/nvme/postgresql/data   PostgreSQL path on the Storage or tmpfs mount point.
    NFS_HOST               localhost                   NFS host; should be localhost for the local container.
    NFS_HOST_MOUNT         /                           NFS exported mount. Set the full path /mnt/nvme for NFS version 3.
    NFS_MOUNT_POINT        /mnt/nvme                   Mount point for the NFS export.
    NFS_SYNC_MODE          async                       Async or sync mode.
    NFS_VERSION            nfs                         Set nfs4 to force NFS version 4.

    NFS version 3

    To support NFS version 3, update the environment variables:

    (screenshot: NFS version 3 environment variables)
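
    Based on the table above, a likely NFS version 3 configuration is the following (an assumption; the screenshot shows the exact values):

    NFS_HOST_MOUNT=/mnt/nvme
    NFS_VERSION=nfs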

    Balena Application

    The Balena Application for Grafana allows you to display device information and manage services using the Balena Supervisor API.

    Working in a productive alliance, Balena, Grafana, and the Balena Application plugin simplify managing a network of heterogeneous IoT devices.

    Balena Application

    Feedback

    We love to hear from users, developers, and the whole community interested in this project. Here are various ways to get in touch with us:

    • Ask a question, request a new feature, and file a bug with GitHub issues.
    • Sponsor our open-source plugins for Grafana with GitHub Sponsor.
    • Star the repository to show your support.

    License

    • Apache License Version 2.0, see LICENSE.
    Visit original content creator repository https://github.com/VolkovLabs/balena-nfs
  • alat-tulis

    Transforming Stationery Marketing Through Data Association Analysis

    Solution: SMART (Sistem Rekomendasi Produk Alat Tulis Dan Peralatan Kantor, a product recommendation system for stationery and office equipment)

    Summary

    Demand for stationery and office supplies in Indonesia is rising along with the growth of the education and business sectors, yet many retail stores struggle to optimize product promotions based on transaction data (Ismarmiaty, 2023). SMART (Sistem Rekomendasi Produk Alat Tulis Dan Peralatan Kantor) is designed to help stationery and office supply stores improve the effectiveness of product promotions by providing recommendations based on up-to-date transaction data. Using Market Basket Analysis (MBA) and the FP-Growth algorithm, the application performs an in-depth analysis of purchasing patterns and identifies associations between products that are frequently bought together (Sagin, 2018). The analysis results are used to identify the top product recommendations that are most relevant to promote in each period. SMART not only simplifies decision-making for store employees, but also enables integration with existing operational systems through an API, so that data and recommendation results can be synchronized automatically. One key recommendation is to place products in the Printing and Paper category close to products in the Office Supplies category, increasing the effectiveness of the marketing strategy. The application delivers significant impact by reducing dependence on manual promotion and increasing sales through a smarter, better-targeted approach. Future development can focus on making the algorithm more adaptive to local market trends.
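
    SMART’s own implementation is not included in this summary, but the core MBA plus FP-Growth idea can be sketched in a few lines of Python with the mlxtend library (illustrative only; toy transactions stand in for the Kaggle dataset referenced below):

    # Illustrative sketch only -- not the SMART codebase.
    import pandas as pd
    from mlxtend.preprocessing import TransactionEncoder
    from mlxtend.frequent_patterns import fpgrowth, association_rules

    # Toy transactions standing in for the store's real sales data
    transactions = [
        ["pen", "notebook", "paper"],
        ["pen", "stapler"],
        ["notebook", "paper", "folder"],
        ["pen", "notebook", "paper", "folder"],
    ]

    # One-hot encode the transactions into a boolean DataFrame
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

    # Mine frequent itemsets with FP-Growth, then derive association rules ranked by lift
    itemsets = fpgrowth(onehot, min_support=0.5, use_colnames=True)
    rules = association_rules(itemsets, metric="lift", min_threshold=1.0)
    print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])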

    Photo by Pacifica Yang: https://www.pexels.com/photo/interior-of-a-bookstore-20415771/

    References

    Ismarmiaty, I., & Rismayati, R. (2023). Product Sales Promotion Recommendation Strategy with Purchase Pattern Analysis FP-Growth Algorithm. Sinkron, 8(1). https://doi.org/10.33395/sinkron.v8i1.11898
    Sagin, A., & Ayvaz, B. (2018). Determination of Association Rules with Market Basket Analysis: Application in the Retail Sector. Southeast Europe Journal of Soft Computing, 7(1). https://doi.org/10.21533/SCJOURNAL.V7I1.149
    Dataset: https://www.kaggle.com/datasets/dickyaryanto/data-transaction-in-2021-2023

    SMART Documentation

    (application screenshots omitted)

    Authors (Team SicK):

    • Roni Antonius Sinabutar (Data Engineer & AI Engineer) [ETL Pipeline and ML Pipeline (for recommendation)]
    • Nurul Aini Komarudin [UI/UX, Data Analysis]
    • Aldy Charlie Rizky [Backend, Cloud]
    • Sufadlan Nugraha [Data Analysis]
    Visit original content creator repository https://github.com/roniantoniius/alat-tulis
  • astro-cloudinary


    Astro Cloudinary

    High-performance image delivery and uploading at scale in Astro powered by Cloudinary.


    This is a community library supported by the Cloudinary Developer Experience team.

    ✨ Features

    • Automatically optimize images and deliver in modern formats
    • Remove backgrounds from images
    • Dynamically add image and text overlays to images
    • AI-based cropping and resizing
    • Transform images using color and effects
    • Generate Open Graph Social Media cards on the fly
    • Drop-in Upload Widget
    • …all at scale with Cloudinary

    🚀 Getting Started

    Installation

    • Install astro-cloudinary with:
    npm install astro-cloudinary
    
    • Add an environment variable with your Cloud Name:
    PUBLIC_CLOUDINARY_CLOUD_NAME="<Your Cloud Name>"
    

    Adding an Image

    import { CldImage } from 'astro-cloudinary';
    
    <CldImage width="600" height="600" src="<Public ID or Cloudinary URL>" alt="<Alt Text>" />
    

    Learn more about CldImage on the Astro Cloudinary Docs

    ❤️ Community & Support

    🛠 Contributing

    Please read CONTRIBUTING.md prior to contributing.

    Working Locally

    Installation

    This project is using pnpm as a way to manage dependencies and workspaces.

    With the project cloned, install the dependencies from the root of the project with:

    pnpm install
    

    Configuration

    To work on the project, you need to have an active Cloudinary account.

    With the account, configure a .env file inside of docs with:

    PUBLIC_CLOUDINARY_CLOUD_NAME="<Your Cloudinary Cloud Name>"
    PUBLIC_CLOUDINARY_API_KEY="<Your Cloudinary API Key>"
    CLOUDINARY_API_SECRET="<Your Cloudinary API Secret>"
    
    PUBLIC_ASSETS_DIRECTORY="assets"
    

    Note: The Cloudinary account can be free, but some features beyond the free tier, like Background Removal, may not work without enabling the add-on.

    The Cloud Name is required for all usage, where the API Key and Secret currently is only used for Upload Widget usage. The Upload Preset is additionally used for the Upload Widgets.

    Uploading Example Images

    In order to run the Docs project, you need to have the images and videos referenced available inside of your Cloudinary account.

    Most of the images and videos used in the project take advantage of the sample assets included in every Cloudinary account, so some may work out-of-the-box, but not all.

    To upload the remaining assets, navigate to the scripts directory and first create a new .env file with:

    CLOUDINARY_CLOUD_NAME="<Your Cloudinary Cloud Name>"
    CLOUDINARY_API_KEY="<Your API Key>"
    CLOUDINARY_API_SECRET="<Your API Secret>"
    

    By default, the images and videos inside of scripts/assets.json will be uploaded to the “assets” directory inside of your Cloudinary account. To change the location, add the CLOUDINARY_ASSETS_DIRECTORY environment variable with your preferred directory:

    CLOUDINARY_ASSETS_DIRECTORY="<Your Directory>"
    

    Note: You will then need to update the /docs/.env file to reference the same directory.

    To run the script, install the dependencies:

    pnpm install
    

    Then run the upload script with:

    pnpm upload
    

    Uploading Example Collections

    Collections are groups of images that are showcased using the cldAssetsLoader helper.

    The directories that make up the sample images include too many images to reasonably ask a contributor to upload.

    We have a few options then.

    1. Skip uploading the collections

    If you’re not working on cldAssetsLoader, or you can test using the single example that utilizes the samples directory, you may not need to worry about this.

    2. Change the collections location

    You could update these directories in the docs/src/content/config.ts file to directories that already exist in your account, such as other sample directories.

    3. Upload Manually

    If you want to have assets available to test this out, you can create the following directories and include some assets inside.

    • collection
    • ecommerce/fashion
    • ecommerce/sneakers

    A good way to handle this is to download some images from Unsplash or your favorite stock photo site.

    Running the Project

    Once installed and configured, from the root of your project run:

    pnpm dev
    

    The project will now be available at http://localhost:4321 or the configured local port.

    Contributors

    Colby Fayock
    Colby Fayock

    💻 📖
    Mateusz Burzyński
    Mateusz Burzyński

    💻
    Hunter Bertoson
    Hunter Bertoson

    💻
    Arpan Patel
    Arpan Patel

    📖
    Saai Syvendra
    Saai Syvendra

    📖
    Raghav Mangla
    Raghav Mangla

    📖
    Kieran Klukas
    Kieran Klukas

    💻
    S. M. V.
    S. M. V.

    📖
    Michael Uloth
    Michael Uloth

    💻
    Justin Philpott
    Justin Philpott

    📖
    Visit original content creator repository https://github.com/cloudinary-community/astro-cloudinary
  • hitchcock

    “There is no terror in the bang, only in the anticipation of it.”

    — Alfred Hitchcock

    Hitchcock

    Hitchcock is a debugging tool for React Suspense. It wraps your calls to React.lazy(), provides a simple cache (based on react-cache), and lets you pause, delay, or invalidate your promises.

    🚨 EXPERIMENTAL 🚨

    Use this only for experimenting with the new React Concurrent Mode. Hitchcock is inefficient and unstable. Also, I have no idea what I’m doing.

    Demos

    The code is in the examples folder.

    Usage

    Try it on CodeSandbox

    Add the dependency:

    $ yarn add hitchcock

    Director

    Import the Director component and add it somewhere in your app:

    import { Director } from "hitchcock";
    
    function YourApp() {
      return (
        <Director>
          <YourStuff />
        </Director>
      );
    }

    Lazy

    Instead of using React.lazy import lazy from hitchcock:

    import { lazy } from "hitchcock";
    
    const HomePage = lazy(() => import("./components/HomePage"));
    
    // Hitchcock's lazy accepts a second parameter with the name of the component:
    const ArtistPage = lazy(() => import("./components/ArtistPage"), "ArtistPage");
    // it's optional, but recommended, it isn't always easy to guess the name from the import

    createResource

    import { createResource } from "hitchcock";
    
    const BeerResource = createResource(
      id =>
        fetch(`https://api.punkapi.com/v2/beers/${id}`)
          .then(r => r.json())
          .then(d => d[0]),
      id => `beer-${id}`
    );
    
    function Beer({ beerId }) {
      const beer = BeerResource.read(beerId);
      return (
        <>
          <h1>{beer.name}</h1>
          <p>{beer.description}</p>
        </>
      );
    }

    createResource has two parameters. The first one is a function that returns a promise. The second one is a function that returns an id, that id is used as the key in the cache and also is used as the name of the resource in the debugger.

    It returns a resource with a read method that will suspend a component until the resource is ready (when the promise resolves).

    Waterfalls

    React docs warn about using Suspense as a way to start fetching data when a component renders. The recommended approach is to start fetching before rendering, for example, in an event handler. Hitchcock doesn’t solve this problem, but it provides a preload method if you want to try:

    import React, { Suspense } from "react";
    import ReactDOM from "react-dom";
    import { createResource, Director } from "hitchcock";
    
    const BeerResource = createResource(
      id =>
        fetch(`https://api.punkapi.com/v2/beers/${id}`)
          .then(r => r.json())
          .then(d => d[0]),
      id => `beer-${id}`
    );
    
    function App() {
      const [beerId, setBeerId] = React.useState(0);
      const deferredBeerId = React.useDeferredValue(beerId, { timeoutMs: 1000 });
    
      const showBeer = deferredBeerId > 0;
    
      const handleChange = e => {
        const newBeerId = +e.target.value;
        BeerResource.preload(newBeerId);
        setBeerId(newBeerId);
      };
    
      return (
        <Director>
          Beer # <input type="number" value={beerId} onChange={handleChange} />
          <Suspense fallback={<div>{`Loading beer #${beerId}...`}</div>}>
            {showBeer && <Beer beerId={deferredBeerId} />}
          </Suspense>
        </Director>
      );
    }
    
    function Beer({ beerId }) {
      const beer = BeerResource.read(beerId);
      return (
        <>
          <h1>{beer.name}</h1>
          <p>{beer.description}</p>
        </>
      );
    }
    
    const container = document.getElementById("root");
    ReactDOM.createRoot(container).render(<App />);

    Contributing

    $ git clone git@github.com:pomber/hitchcock.git
    $ cd hitchcock
    $ npx lerna bootstrap

    Run the examples:

    $ yarn start:example movies
    $ yarn start:example suspensify

    Publish new version:

    $ yarn build:packages
    $ npx lerna publish

    License

    Released under MIT license.

    Visit original content creator repository https://github.com/pomber/hitchcock
  • Random-Saying-Generator

    Random Saying Generator

    A web service with a RESTful API that lets you generate a random saying, like or dislike it, and add your own saying (adding a saying is currently unavailable).
    The application uses the Play framework and Cassandra as its database.

    Running

    # developing mode
    ./gradlew run
    

    or

    # release mode
    ./gradlew dist
    ./build/stage/playBinary/bin/playBinary
    

    And then go to http://localhost:9000/sayings to see the running web application.

    Database configuration commands:

    # drops existing keyspace with all data
    ./gradlew dropSchema
    
    # creates keyspace and necessary tables (just executes commands from "conf/create.cql" file)
    ./gradlew createSchema
    
    # fills existing tables with data from "conf/init_data.txt" file
    ./gradlew fillTables
    

    API

    GET /sayings HTTP/1.1

    The main endpoint (controllers.Main); from it you can generate a random saying or add your own.

    RESPONSE

    HTTP/1.1 200 OK

    Content-Type: application/hal+json

    RESPONSE BODY

    {
      "_links": {
        "self": { "href": "/sayings" },
        "random": { "href": "/sayings/random" },
        "add": { "href": "/sayings/new" }
      }
    }

    GET /sayings/{id} HTTP/1.1

    Gets saying with a given id.

    RESPONSE

    HTTP/1.1 200 OK

    Content-Type: application/hal+json

    RESPONSE BODY

    {
      "saying": {
        "text": "Text of saying",
        "author": "Author",
        "likes": 120,
        "dislikes": 15
      },
      "_links": {
        "self": { "href": "/sayings/{id}" },
        "rate": { "href": "/sayings/{id}/rate" },
        "random": { "href": "/sayings/random" },
        "add": { "href": "/sayings/new" }
      }
    }

    GET /sayings/random HTTP/1.1

    Gets random saying.

    RESPONSE

    Exactly the same as for request GET /sayings/{id} HTTP/1.1

    POST /sayings/new HTTP/1.1

    Adds new saying to the system.

    Accept: application/json

    REQUEST BODY

    {
      "saying": {
        "text": "Text of saying",
        "author": "Author"
      }
    }

    RESPONSE

    HTTP/1.1 201 Created

    Location: /sayings/{id}

    If such saying already exists:

    RESPONSE

    HTTP/1.1 409 Conflict

    Location: /sayings/{id}

    Content-Type: application/json

    RESPONSE BODY

    {
      "message": "Same or very similar saying already exists."
    }

    POST /sayings/{id}/rate HTTP/1.1

    Rate the saying as liked or disliked.
    The value of the “rate” field should be 1 or -1, otherwise “400 Bad Request” will be returned.


    Accept: application/json

    REQUEST BODY

    {
      "rate": 1
    }

    RESPONSE

    HTTP/1.1 204 No Content
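
    Putting the API together, a typical client session might look like this (saying id 42 is hypothetical):

    # Generate a random saying
    curl -i http://localhost:9000/sayings/random

    # Like saying 42
    curl -i -X POST http://localhost:9000/sayings/42/rate \
      -H "Content-Type: application/json" \
      -d '{"rate": 1}'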

    Supported HTTP Status Codes:

    200 OK Successful request.

    201 Created Resource posted in request was successfully created.

    204 No Content The server has fulfilled the request but does not need to return an entity-body.

    400 Bad Request Wrong URI or JSON representation of data.

    404 Not found The requested resource could not be found.

    409 Conflict Same or very similar resource already exists.

    500 Internal Server Error Unexpected condition was encountered on server and request can’t be handled.

    Visit original content creator repository
    https://github.com/mikhail-kukuyev/Random-Saying-Generator