We are looking for an innovative and creative Artificial Intelligence/Machine Learning Engineer with experience writing code and algorithms and building complex neural networks in a variety of programming languages.

 

Requirements:

 

Technical Expertise

  1. Frameworks: PyTorch, scikit-learn, XGBoost

  2. Libraries: Pandas, spaCy, NLTK, Google Chart API, NumPy, Matplotlib, PIL, OpenCV, Asyncio, SciPy, ggplot2, Dash, Plotly, Streamlit, Gradio, Theano

  3. GPU Computing: CUDA, OpenCL

  4. Neural Networks: Convolutional and Recurrent Neural Networks (LSTM, GRU, etc.), Autoencoders (VAE, DAE, SAE, etc.), Generative Adversarial Networks (GANs), Deep Q-Networks (DQN), Feedforward Neural Networks, Radial Basis Function Networks, Modular Neural Networks

  5. Data Engineering: Apache Spark (PySpark), Apache Kafka, AWS infrastructure, Dask

  6. Cloud: Microsoft Azure, AWS, Google Cloud

  7. BI & Visualization: Kibana, SAS, Power BI, Amazon QuickSight, Qlik, MicroStrategy, Tableau

 

  • Mathematics: Matrices, Vectors, Derivatives, Integrals, Statistics (Mean, Standard Deviations, and Gaussian Distributions), Probability (Naive Bayes, Gaussian Mixture Models, and Hidden Markov Models)

  • Programming: Python 3, SQL, JavaScript, TypeScript, CoffeeScript, C#, Bash

  • Big Data: Hadoop, Spark, Cassandra, MongoDB

  • Data Science: Data acquisition, preparation, analysis, and manipulation

  • Machine Learning: scikit-learn; supervised, unsupervised, and reinforcement learning

  • Deep Learning: TensorFlow, Keras; neural networks (CNNs, RNNs, GANs, LSTMs)

  • Business Intelligence: Tableau, QlikView, Power BI

  • Signal Processing

    1. Solve problems using signal processing techniques.

    2. Familiarity with advanced signal processing algorithms such as wavelets, shearlets, curvelets, and bandlets is a bonus.

  • Rapid prototyping: Launching products quickly by building a scale model first, allowing engineers to develop a prototype rapidly and test it before full-scale development.

  • Natural Language Processing (NLP): Libraries such as Gensim and NLTK.

  • Knowledge of algorithm theory sufficient to understand Gradient Descent, Convex Optimisation, Lagrange multipliers, Quadratic Programming, Partial Differential Equations, and Summations.

  • Neural network architectures: Used to automate tasks that are arduous for humans; they have proven extremely useful in areas such as translation, speech recognition, and image classification.
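To make the algorithm-theory expectations above concrete, here is a minimal gradient descent sketch in Python. The objective function, learning rate, and step count are illustrative choices, not part of the role's requirements:

```python
# Minimal gradient descent minimizing the convex function f(x) = (x - 3)^2,
# whose unique minimum is at x = 3. The gradient is f'(x) = 2 * (x - 3).
def gradient_descent(grad, x0, lr=0.1, n_steps=200):
    x = x0
    for _ in range(n_steps):
        x -= lr * grad(x)  # step in the direction opposite the gradient
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With a fixed learning rate of 0.1, each step contracts the distance to the minimum by a factor of 0.8, so the iterate converges to x = 3.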

 

 

Responsibilities:

1. Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying AI/ML techniques to address challenges and improve processes.

2. Design and develop AI/ML models, algorithms, and systems to extract insights, predict outcomes, and automate tasks.

3. Conduct data preprocessing, feature engineering, and exploratory data analysis to prepare data for model training and evaluation.

4. Train, validate, and fine-tune ML models using a variety of techniques, including supervised and unsupervised learning, deep learning, and reinforcement learning.

5. Apply statistical analysis and experimental design to evaluate model performance, interpret results, and optimize models for accuracy, precision, and efficiency.

6. Deploy ML models into production environments, ensuring scalability, reliability, and maintainability.

7. Collaborate with software engineers to integrate AI/ML capabilities into existing systems or develop new software applications.

8. Continuously monitor and evaluate the performance of deployed ML models, identifying opportunities for model improvement or retraining.

9. Stay up-to-date with the latest AI/ML research, technologies, frameworks, and tools, and propose their adoption when appropriate.

10. Collaborate with data engineers to ensure efficient data storage, retrieval, and processing for AI/ML applications.

11. Contribute to the development and maintenance of a robust AI/ML infrastructure, including data pipelines, model repositories, and version control systems.

12. Provide technical guidance and mentorship to junior team members, sharing best practices and promoting a culture of continuous learning and improvement.

 

Trial Task

 

  1. Explain the bias-variance trade-off in machine learning. How does it impact model performance?

  2. Explain the concept of ROC curves and AUC (Area Under the Curve) in binary classification.

  3. Develop a clustering algorithm from scratch using a programming language of your choice. Apply the algorithm to a dataset and analyze the clustering results.
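As a hedged illustration of what trial task 3 might look like, here is a minimal k-means clustering sketch written from scratch in Python with NumPy. The dataset, the choice of k, and the inertia metric are illustrative assumptions, not a prescribed solution:

```python
# Minimal k-means clustering from scratch (one possible take on trial task 3).
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialise centroids by sampling k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged: assignments no longer change
        centroids = new_centroids
    # Inertia (within-cluster sum of squares) for analysing cluster quality.
    inertia = ((X - centroids[labels]) ** 2).sum()
    return labels, centroids, inertia

# Toy dataset: two well-separated Gaussian blobs of 50 points each.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centroids, inertia = kmeans(X, k=2)
```

On well-separated data like this, the two blobs end up in distinct clusters and the inertia is low; analysing how inertia changes with k (the "elbow" method) is a common way to discuss the clustering results.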
