Category: Blog

  • eu_crime_stats

    Project Name:

    Crime and Justice Statistics for the EU

    The Purpose of the Project:

    I have obtained a dataset from the Central Statistics Office of Ireland (http://www.cso.ie) which describes the levels of crime by type
    for each country within the EU.
    For the purposes of this project I have focused on the smaller countries, those with populations of less than 10 million people,
    because it would not make sense to compare the crime statistics of the smaller countries against the bigger ones
    (e.g. Northern Ireland vs. Germany).

    The dataset also contains details on each country's criminal justice system, such as policing and prison population.

    The data covers the years 2008 to 2014.

    Technologies Used:

    Python (Flask),
    MongoDB,
    D3.js,
    DC.js,
    Crossfilter.js,
    jQuery,
    Bootstrap,
    CSS3,
    HTML5

    Installation

    1. Clone the repo locally: git clone https://github.com/cormacio100/eu_crime_stats.git

    2. Import the eu_crime_stats.json file from the db folder into a MongoDB collection using the settings below:

      MONGODB_HOST = 'localhost',
      MONGODB_PORT = 27017,
      DBS_NAME = 'projectModule2',
      COLLECTION_NAME = 'eu_crime_stats'

    3. Run the project in PyCharm, then access the site in a browser at http://127.0.0.1:5000/
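    To sanity-check the settings from step 2, here is a hypothetical sketch of how a Flask app might reach that collection. The mongo_uri helper is my own illustration, not code from the project, and the pymongo usage is commented out since it needs a running MongoDB:

    ```python
    # Settings mirroring step 2 of the installation instructions.
    MONGODB_HOST = 'localhost'
    MONGODB_PORT = 27017
    DBS_NAME = 'projectModule2'
    COLLECTION_NAME = 'eu_crime_stats'

    def mongo_uri(host, port):
        # Build a standard MongoDB connection URI from host and port
        return 'mongodb://%s:%s' % (host, port)

    # from pymongo import MongoClient
    # client = MongoClient(mongo_uri(MONGODB_HOST, MONGODB_PORT))
    # collection = client[DBS_NAME][COLLECTION_NAME]
    print(mongo_uri(MONGODB_HOST, MONGODB_PORT))
    ```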

    Code Testing and Validation

    HTML – https://validator.w3.org/
    CSS – https://jigsaw.w3.org/css-validator/

    Credits

    Central Statistics Office of Ireland

    TO DO

    Statistics for countries with populations greater than 10 million people

    Visit original content creator repository
    https://github.com/cormacio100/eu_crime_stats

  • jsonschema2atd

    jsonschema2atd

    Generate an ATD file from a JSON Schema / OpenAPI document.

    Installation

    The package is available on opam.

    opam install jsonschema2atd
    

    If you wish to install the development version you can do so with:

    make install

    Usage

    Generate an ATD file from a JSON Schema:

    jsonschema2atd ../path-to-jsonschema.json
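    As a rough, hand-written illustration (not actual tool output), a schema such as:

    ```json
    {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "age": { "type": "integer" }
      },
      "required": ["name"]
    }
    ```

    might map to an ATD type along these lines, with the non-required field becoming optional:

    ```
    type root = {
      name: string;
      ?age: int option;
    }
    ```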

    Generate an ATD file from an OpenAPI document:

    jsonschema2atd --format openapi ../path-to-openapi.json

    You can call jsonschema2atd and atdgen in your dune file to generate OCaml types and JSON serializers/deserializers from your JSON Schema or OpenAPI document:

    ; Add jsonschema2atd.runtime to have access to the oneOf serialization adapter (for variant unboxing).
    (library
     ...
     (libraries ... jsonschema2atd.runtime))
    
    ; Generate dashboard_gen.atd from the dashboard_types_gen.json OpenAPI document with jsonschema2atd.
    (rule
     (target dashboard_gen.atd)
     ; Store the generated .atd file in the code. 
     (mode promote)
     (deps ../grok/dashboard_types_gen.json)
     (action
      (with-stdout-to
       %{target}
       (run
        %{bin:jsonschema2atd} -f openapi
        %{deps}))))
    
    ; Generate dashboard_gen_t.mli, dashboard_gen_t.ml, dashboard_gen_j.mli, and dashboard_gen_j.ml from dashboard_gen.atd with atdgen.
    (rule
     (targets
      dashboard_gen_t.mli
      dashboard_gen_t.ml
      dashboard_gen_j.mli
      dashboard_gen_j.ml)
     (deps dashboard_gen.atd)
     (action
      (progn
       (run %{bin:atdgen} -j -j-std -j-defaults %{deps})
       (run %{bin:atdgen} -t %{deps}))))
    

    Other options can be used to control the output:

    • --json-ocaml-type KEYWORD:MODULE.PATH:TYPE-NAME to control the definition of
      the json type used as default/fallback.
    • --only-matching REGEXP to limit the JSON Schema types to convert; when used
      together with --avoid-dangling-refs, missing types are replaced with json.

    See also jsonschema2atd --help.

    ToDo

    • Base types
    • Records
    • Nullable
    • String enums
    • Integer enums
    • Other primitive enums
    • Refs (OpenAPI format)
    • OneOf (Only serialization is supported)
    • not
    • anyOf
    • allOf

    Visit original content creator repository
    https://github.com/ahrefs/jsonschema2atd

  • RestaurantBill

    Restaurant Billing System

    This C project, developed during my first semester in college, is a simple restaurant billing system. It allows users to create invoices, view all previous invoices, search for invoices by customer name, and save the invoices to a file.

    Features

    • Create Invoice: Users can create invoices by entering customer details, items purchased, quantity, and unit price. The invoice includes the date, customer name, items purchased, quantity, total price, discounts, and grand total.
    • Show All Invoices: Users can view all previous invoices stored in the file.
    • Search Invoice: Users can search for a specific invoice by entering the customer’s name.
    • Save Invoice: Users have the option to save the invoice to a file.

    How to Run

    To run the program, compile the source code using any C compiler. For example, if you’re using GCC, you can compile the program with the following command:

    gcc -o restaurant_billing restaurant_billing.c

    After compilation, execute the program:

    ./restaurant_billing

    Usage

    Follow the on-screen prompts to navigate through the menu options:

    1. Create Invoice: Enter customer details, items purchased, quantity, and unit price.
    2. Show All Invoices: View all previous invoices.
    3. Search Invoice: Enter the customer’s name to search for a specific invoice.
    4. Exit: Exit the program.

    File Management

    The program stores the invoices in a file named Restaurant.dat. Make sure to handle file permissions and ensure that the file is accessible to the program.
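    The save step can be sketched as appending fixed-size records to Restaurant.dat. The struct layout below is hypothetical (the project's actual struct may differ):

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical record layout -- the project's actual struct may differ. */
    struct Invoice {
        char customer[50];
        float grand_total;
    };

    /* Append one invoice record to the data file; returns 0 on success. */
    int save_invoice(const char *path, const struct Invoice *inv) {
        FILE *fp = fopen(path, "ab");   /* binary append keeps earlier records */
        if (fp == NULL)
            return -1;                  /* e.g. missing file permissions */
        fwrite(inv, sizeof *inv, 1, fp);
        fclose(fp);
        return 0;
    }

    int main(void) {
        struct Invoice inv;
        strcpy(inv.customer, "Alice");
        inv.grand_total = 42.50f;
        return save_invoice("Restaurant.dat", &inv);
    }
    ```

    Searching by customer name then amounts to reading the file back record by record with fread and comparing the customer field.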

    Feel free to explore and modify the source code to enhance the functionality or customize it according to your requirements.

    Visit original content creator repository
    https://github.com/priyanshuahir000/RestaurantBill

  • awesome-bookmarks

    Blog | WeChat Official Account | GitHub

    I hope fellow bookmark collectors find something useful here, and you are welcome to recommend the "power tools" you are currently using!

    Other channels:

    • WeChat: happy to meet like-minded friends; feel free to scan the QR code, and please note where you found me 💯
    • Official account: Coder魔法院 – out of laziness, I publish useful content "intermittently": technology, tools, and more 🎯

    Read Online

    Since I'm a programmer, this page collects quite a lot of development-related bookmarks!

    Search

    This comes first because searching is an essential problem-solving skill! Many people type a whole sentence when searching; extracting the key words often gets you more accurate results.

    IT

    Development Tools

    Git

    • sourcegraph – a real power tool when paired with GitHub; there is a Chrome extension
    • Git History – an article explains how to use it, but from the official site it seems you only need to replace "github.com" in a GitHub repository URL with "github-history.xyz.com" to get an animated view of a file's commit history; a browser extension is also available
    • BitHubLab – search projects across all the major Git platforms, including GitHub, GitLab, and BitBucket
    • gitignore.io – generate a .gitignore file for a development project online
    • Octodex – GitHub's gallery of Octocat images
    • octoverse – an overview of GitHub technology trends, e.g. the latest programming-language rankings

    Regular Expressions

    Linux

    • commandlinefu – a site ranking popular command-line one-liners
    • Man-Linux – Linux command-line help lookup; a foreign site with a very clean layout
    • ManKier – Linux man pages
    • ExplainShell – as the name suggests, paste in a shell command and it breaks it down for you

    Networking

    Web Front-end

    Icons

    Testing

    Databases

    API

    • devdocs – online developer documentation covering many languages

    General

    • codelf – help with naming variables and functions
    • httpbin – an HTTP testing service you can use to exercise HTTP requests; source code available
    • API-POI Search Service – look up administrative regions by latitude/longitude; submit any coordinate plus a keyword (e.g. food) to get the locations and brief descriptions of nearby services
    • pythontutor – visualize Python execution step by step

    Algorithms

    Mirror Sources

    Tutorials

    IT

    Other

    Video

    General

    • Bilibili – shamelessly including my own homepage, though it is rarely updated

    IT

    Film & Entertainment

    Reading

    Fun Resources

    E-books

    Chinese resources:

    English resources:

    Communities & Forums

    IT

    Fun Communities

    Office Tools

    Office & PDF

    Document Editing

    Diagramming

    • ProcessOn – an online diagramming tool I pay for myself; it can draw flowcharts, mind maps, UML diagrams, and more. Highly recommended!
    • 小画桌 – whiteboard drawing
    • Cloudcraft – draws network and infrastructure architecture diagrams; excellent
    • 幕布 (Mubu)
    • 百度脑图 (Baidu NaoTu)

    Others

    Images

    Image Tools

    • TinyPng – online image compression
    • Trianglify – generate your own low-poly triangle background images
    • vectormagic – convert images into vector graphics
    • logoly.pro – generate customizable Pornhub-style logos; my wiki logo was made with it
    • shields – the badges you see on GitHub projects can be generated with this tool!
    • remove.bg – image processing; a gem that specializes in cutting people out of photos
    • carbon – generate polished images of source code

    Image Design

    Image Resources

    • Pixabay – free images, not for commercial use; supports Chinese search
    • Pexels – free images, not for commercial use
    • Unsplash – free images, not for commercial use
    • colorhub – high-resolution copyright-free images, free for personal and commercial use

    Wallpapers

    LOGO

    Image Hosting

    Browsers

    Design

    Career

    Fitness

    • musclewiki – click a muscle group to see the corresponding exercise animations

    Math

    History

    • 全历史 (Allhistory) – a very cool site that makes it easy to browse the achievements of each historical period. Highly recommended!

    Entertainment

    Software Downloads

    Linux Software

    awesome-wiki

    The awesome-wiki series will keep being updated; the wikis available to read so far are:

    Notes

    Contributors

    You can recommend your "power tools" 🎯 in the following ways:

    If a recommendation is accepted, the contributor will be listed below!

    Official Account

    WeChat official account: Coder魔法院

    Support

    Writing all this isn't easy; treat me to a tea egg 👇

    Support

    Visit original content creator repository https://github.com/awesome-wiki/awesome-bookmarks
  • odin-recipes

    The Odin Project

    This project aims to create a collection of recipes using HTML. The project is divided into iterations, with each iteration adding new features and improving the overall structure of the recipe collection.

    Iteration 1: Initial Structure

    In this iteration, an initial structure is set up for the recipe collection. An index.html file is created with basic HTML boilerplate code. The file includes an <h1> heading with the title “Odin Recipes.”

    Iteration 2: Recipe Page

    The second iteration focuses on creating a recipe page template. A new directory named /recipes is created within the project directory. Inside this directory, an HTML file is created for each recipe, named after the dish it contains. For example, lasagna.html can be used as a template for a lasagna recipe.

    The index.html file is updated to include links to the recipe pages. Each recipe link is added under the “Odin Recipes” heading using the <a> tag.

    Iteration 3: Recipe Page Content

    In this iteration, the recipe pages are enhanced with more content. Each recipe page should include the following:

    Iteration 4: Add More Recipes

    The final iteration involves adding two more recipes to the collection. These recipes should follow the same page structure as the existing recipe page. The index.html file should be updated to include links to the newly added recipes. Consider using an unordered list to organize the recipe links and prevent them from appearing on a single line.
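    To make that structure concrete, here is a hand-written sketch of what index.html might look like after Iteration 4 (the recipe names and file paths are examples only, not part of the assignment):

    ```html
    <!-- Example index.html: an unordered list keeps the links off a single line. -->
    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>Odin Recipes</title>
      </head>
      <body>
        <h1>Odin Recipes</h1>
        <ul>
          <li><a href="recipes/lasagna.html">Lasagna</a></li>
          <li><a href="recipes/tacos.html">Tacos</a></li>
          <li><a href="recipes/curry.html">Curry</a></li>
        </ul>
      </body>
    </html>
    ```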

    Once the project is completed, the following skills will be demonstrated:

    This project provides a foundation for building a more advanced recipe collection, with the possibility of adding additional features such as search functionality, responsive design, and interactive elements.

    Visit original content creator repository https://github.com/ibnaleem/odin-recipes
  • Amazon_Rekognition_DetectingPPE

    AWS PPE (Personal Protective Equipment) Detection

    A Python implementation that uses AWS Rekognition to detect personal protective equipment (PPE) in images stored in S3 buckets. The script provides detailed analysis of face covers, hand covers, and head covers for each person detected in the image.

    Prerequisites

    • Python 3.8.8
    • AWS Account with Rekognition access
    • AWS Access Key and Secret Key
    • S3 bucket containing images
    • boto3 library

    Installation

    pip install boto3

    Required Libraries

    import boto3

    Configuration

    aws_accesskey = "Your Access Key"
    aws_secretaccess = "Your Secret Access Key"
    myregion = "your-region"

    Features

    Main PPE Detection Function

    def Detect_PPE(aws_access, aws_secret, aws_region, Image, Bucket_Name):
        """
        Detects PPE equipment in images stored in S3.
        
        Args:
            aws_access: AWS access key
            aws_secret: AWS secret key
            aws_region: AWS region name
            Image: Name of image file in S3
            Bucket_Name: S3 bucket name
        
        Returns:
            int: Number of persons detected in the image
        """

    Analysis Parameters

    Default configuration:

    • Minimum confidence threshold: 80%
    • Required equipment types:
      • FACE_COVER
      • HAND_COVER
      • HEAD_COVER

    Usage Example

    person_count = Detect_PPE(
        aws_accesskey,
        aws_secretaccess,
        "us-west-2",
        "image.jpg",
        "my-bucket"
    )

    Output Information

    The analysis provides detailed information about:

    1. Person Detection:

      • Unique person IDs
      • Body parts detected
      • Confidence scores
    2. PPE Details:

      • Type of equipment detected
      • Coverage assessment
      • Confidence scores
      • Bounding box coordinates
    3. Summary Statistics:

      • Persons with required equipment
      • Persons without required equipment
      • Indeterminate cases
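    In the boto3 response, those summary categories come back as lists of person IDs. A small standard-library sketch of tallying them (the canned response below is invented for illustration, following the boto3 response shape, and is not real service output):

    ```python
    # Tally the Summary section of a DetectProtectiveEquipment response.
    # Field names follow the boto3 response shape; the canned response
    # below is invented for illustration.
    def summarize_ppe(response):
        summary = response.get("Summary", {})
        return {
            "persons_detected": len(response.get("Persons", [])),
            "with_required_equipment": len(summary.get("PersonsWithRequiredEquipment", [])),
            "without_required_equipment": len(summary.get("PersonsWithoutRequiredEquipment", [])),
            "indeterminate": len(summary.get("PersonsIndeterminate", [])),
        }

    canned = {
        "Persons": [{"Id": 0}, {"Id": 1}],
        "Summary": {
            "PersonsWithRequiredEquipment": [0],
            "PersonsWithoutRequiredEquipment": [1],
            "PersonsIndeterminate": [],
        },
    }
    print(summarize_ppe(canned))
    ```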

    Example Output Format

    Detected PPE for people in image [image_name]
    
    Detected people
    ---------------
    Person ID: 0
    Body Parts
    ----------
        FACE
            Confidence: 99.69
            Detected PPE
            ------------
            FACE_COVER
                Confidence: 99.70
                Covers body part: True
                Confidence: 99.49
    

    Features Detected

    1. Body Parts:

      • FACE
      • HEAD
      • LEFT_HAND
      • RIGHT_HAND
    2. PPE Types:

      • FACE_COVER (masks, respirators)
      • HAND_COVER (gloves)
      • HEAD_COVER (helmets, hard hats)

    Best Practices

    1. Image Quality:

      • Use clear, well-lit images
      • Ensure subjects are clearly visible
      • Consider image resolution requirements
    2. Performance:

      • Batch process multiple images when possible
      • Consider S3 transfer times
      • Monitor API usage limits
    3. Security:

      • Secure credential storage
      • Use appropriate S3 bucket permissions
      • Implement access controls

    Error Handling

    The implementation handles:

    • S3 access errors
    • Image processing failures
    • Invalid parameters
    • Service limits

    Limitations

    • Works only with images stored in S3
    • Requires minimum confidence threshold
    • Subject to AWS Rekognition service limits
    • May have reduced accuracy in poor lighting conditions

    AWS Requirements

    1. IAM Permissions:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "rekognition:DetectProtectiveEquipment",
                      "s3:GetObject"
                  ],
                  "Resource": "*"
              }
          ]
      }
    2. S3 Bucket Requirements:

      • Appropriate bucket permissions
      • Supported image formats
      • Size limits

    Support

    For AWS specific issues, refer to:

    Visit original content creator repository
    https://github.com/AShirsat96/Amazon_Rekognition_DetectingPPE

  • gears

    gears

    Rust

    In-progress Game Gear emulator written in Rust.

    It has a mostly complete Z80 emulator that passes z80test (1.0 only), zexall, and the FUSE tests.

    Current status

    • supports the Game Gear / Master System SEGA ROM banking
    • minimal CPU interrupt support (mode 1)
    • a VDP implementation, able to display some games
    • a small "UI" relying on the winit+pixels+cpal+gilrs crates that supports keyboard and gamepad input (both with hardcoded bindings)
    • a test suite for the VDP with some ROM frames to prevent regressions
    • a WASM target for in-browser emulation

    TODO

    • Finish VDP details
      • still missing horizontal interrupt testing (H counter, line completion)
    • Polish the PSG (sound)
      • there is no filtering or downsampling strategy; a low-pass filter should do
    • The WASM build is incomplete: it lacks a complete UI (e.g. configurable keybindings), just like the desktop version.
    • Support more Game Gear games. Many work, but there might be bugs.
    • It's fast enough, but there is room for improvement.
    • Master System support at some point, because some Game Gear cartridges actually shipped the SMS version. It would also be useful for enjoying the wider screen in some infamously hard Game Gear games (Sonic 2, for example).
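    The PSG item above refers to simple audio smoothing; a standalone sketch of a one-pole low-pass filter (the cutoff and sample rate here are example values, not the emulator's actual constants):

    ```rust
    // One-pole low-pass filter sketch (illustrative only).
    struct LowPass {
        alpha: f32, // smoothing coefficient derived from cutoff and sample rate
        state: f32, // previous output sample
    }

    impl LowPass {
        fn new(cutoff_hz: f32, sample_rate: f32) -> Self {
            let dt = 1.0 / sample_rate;
            let rc = 1.0 / (2.0 * std::f32::consts::PI * cutoff_hz);
            LowPass { alpha: dt / (rc + dt), state: 0.0 }
        }

        // Feed one input sample, get one smoothed output sample.
        fn next(&mut self, x: f32) -> f32 {
            self.state += self.alpha * (x - self.state);
            self.state
        }
    }

    fn main() {
        let mut lp = LowPass::new(1000.0, 44100.0);
        // A unit step slews toward 1.0 instead of jumping, removing harsh edges.
        for _ in 0..4 {
            println!("{}", lp.next(1.0));
        }
    }
    ```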

    Demo

    The web version of the gears emulator is available online here. It should always be the latest version.

    Learned lessons

    Over the course of writing this emulator, I took a pause to reflect on some intricacies of the Z80, and gave a talk on Z80’s “last secrets” at FOSDEM 2022. It’s not exhaustive, and there are many more such secrets that have been discovered in the past 10 years by the Z80 emulation community.

    I also gave two talks at FOSDEM 2024: one about WASM on the web with Rust, using gears as an example, and another with my advice on how to start writing an emulator.

    I also wrote the following updates:

    Visit original content creator repository https://github.com/anisse/gears
  • abench-management-console

    1. Implementing A-Bench
    2. Getting started
    3. Additional information

    Implementing A-Bench

    This is my master's thesis project. The main goal is to make installing, setting up, and running the Big Data benchmark A-Bench easier, automating the process as much as possible. Using HTML, Python 3, Flask, pandas, and some other tools, I created a WebUI management console for easier control over the infrastructure setup, running the benchmark, and visualizing the results in a few charts covering metrics such as CPU, memory, and file-system usage.


    Requirements:

    • Internet connection
    • Ubuntu 18.04 LTS (clean install)
    • Modern web browser like Chromium or Mozilla Firefox

    Tech

    The ABench management console uses a number of open source projects to work properly:

    • python3 – Python Programming Language version 3.6
    • Flask – Flask is a microframework for Python based on Werkzeug, Jinja 2 and good intentions
    • pandas – pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language
    • chart.js – Simple yet flexible JavaScript charting for designers & developers

    Getting started

    1. Step:

    • Download the repository into home/user/ in order for paths to work properly
    • Go to the project folder "scripts", make install_requirements.sh executable, and run it as root to install any missing tools and to download a GitHub repository used for creating the A-Bench infrastructure. An action from the user is required during the installation [Press ENTER to continue]
    $ chmod +x install_requirements.sh
    $ sudo ./install_requirements.sh
    • To start the main WebUI run the python script in terminal as root:
    $ sudo python3 abench-management-console.py
    • Verify the deployment by navigating to your server address in your preferred browser
    http://127.0.0.1:5000

    2. Step:

    • On the homepage there are three columns of buttons on the left and, on the right, a text box showing the output of commands run from the page
    • The first set of buttons, under "Setup", is used to check the software prerequisites (whether everything needed is installed), to deploy the A-Bench infrastructure, and to monitor it via the Grafana/Kubernetes dashboards after a successful deployment
    • The second set of buttons, under "Run", is used to configure which queries to run and to run sample A-Bench experiments after selecting the queries on the "Configuration" page
    • The third set of buttons, under "Analyse", is used to load the results ONLY AFTER running a sample experiment as described in Step 3

    3. Step:

    • Clicking the "Configuration" button under "Run" forwards you to a new page
    • All 30 queries that can be run as an experiment are shown there as checkboxes, each with an explanation
    • After selecting the desired queries, click "Save config"; the chosen queries are then shown below the query list
    • An environment variable is created, and it is used after clicking "Run SRE with HIVE" / "Run SRE with SPARK"

    4. Step:

    • After successfully running an experiment the results will be saved in:
    $ ~/wd/abench/a-bench/results/
    • On the homepage, under "Run", clicking "Load results" opens a file explorer; navigate to ~/wd/abench/a-bench/results/ and choose the experiment_tag_sample_qXX.zip file to load the results and analyze them using density charts
    • To load results from a different experiment, repeat the previous step
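    The chart step boils down to aggregating the per-experiment .csv tables. A rough sketch using only the standard library (the project itself uses pandas, and the column names below are invented for illustration):

    ```python
    # Aggregate a metrics CSV into per-column means, the kind of summary
    # a chart would plot. Column names here are invented for illustration.
    import csv
    import io
    import statistics

    def column_means(csv_file):
        """Return the mean of every numeric column in a metrics CSV."""
        columns = {}
        for row in csv.DictReader(csv_file):
            for name, value in row.items():
                columns.setdefault(name, []).append(float(value))
        return {name: statistics.mean(values) for name, values in columns.items()}

    # Illustrative data standing in for an extracted experiment_results table
    sample = io.StringIO("cpu,memory_mb\n0.50,1024\n0.70,2048\n")
    print(column_means(sample))
    ```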

    Additional Information

    • The folder "all_executed_exp" stores the results of all executed experiments
    • The folder "experiment_results" holds the experiment results needed for the charts, saved as .csv tables
    • The folder "outputs" contains two .txt files used to show the output of all executed commands on the homepage
    • The folder "scripts" contains all the scripts needed for deploying and running the infrastructure
    • The folder "templates" contains all the HTML pages
    • The folder "static" contains all the .css files used for styling the pages
    • Everything necessary for the infrastructure is downloaded into "~/wd" from the GitHub repository https://github.com/FutureApp/a-bench by Michael Czaja
    Visit original content creator repository https://github.com/o7ka4aln1ka/abench-management-console
  • Poople-Trends

    Poople-Trends

    Project made for ASCI: 20-21

    by Prajjwal Datir
    using Python3, Flask, bs4

    Problem Statement

    To run the application you can either use the flask command or Python's -m switch with Flask. Before you can do that, you need to tell your terminal which application to work with by exporting the FLASK_APP environment variable.

    $ export FLASK_APP=app.py
    $ flask run
    
     * Running on http://127.0.0.1:5000/

    If you are on Windows, the environment variable syntax depends on the command-line interpreter.

    On Command Prompt:

    C:\path\to\app>set FLASK_APP=app.py

    On PowerShell:

    PS C:\path\to\app> $env:FLASK_APP = "app.py"

    Alternatively you can use python -m flask:

    $ export FLASK_APP=app.py
    $ python -m flask run
     * Running on http://127.0.0.1:5000/

    This launches a very simple builtin server, which is good enough for testing but probably not what you want to use in production. For deployment options see Deployment Options.

    Now head over to http://127.0.0.1:5000/, and you should see the app running.
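    For reference, a minimal app.py skeleton of the kind those commands would discover and run (hypothetical; the real app also scrapes trend data with bs4):

    ```python
    # Minimal Flask entry point that `flask run` / `python -m flask run` discovers.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        # The real view would render scraped trend data via a template.
        return "Poople-Trends is running"
    ```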

    for more info visit here

    Visit original content creator repository
    https://github.com/PrajjwalDatir/Poople-Trends