
Sushan7 is the blog admin. Welcome!

Thursday, February 2, 2023

Can ML be implemented on Smart Bins?

February 02, 2023 // by Sushan7 // 2 comments

Smart bins, also known as smart waste management systems, are an innovative solution to traditional waste collection methods. By integrating advanced technologies such as machine learning, smart bins have become a reliable and efficient way of predicting waste levels and improving the waste management process.



Machine learning algorithms can be used to predict waste levels in smart bins by analyzing data from sensors and cameras installed in the bins. The data includes the weight and volume of waste, the type of waste, and the time it was disposed of. The algorithms can use this data to create models that accurately predict the waste level in the bin and determine when the bin is likely to reach its maximum capacity.
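
As a rough illustration, here is a minimal sketch of how such a prediction model might be trained. The sensor features, the synthetic data, and the choice of scikit-learn's RandomForestRegressor are all illustrative assumptions, not a description of any particular deployed system.

```python
# Minimal sketch: predicting a bin's fill percentage from sensor readings.
# All data here is synthetic and the feature set is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Hypothetical sensor readings: weight (kg), volume used (liters), hour of disposal
weight = rng.uniform(0, 50, n)
volume = rng.uniform(0, 120, n)
hour = rng.integers(0, 24, n)

# Illustrative target: fill percentage loosely driven by volume and weight
fill_pct = np.clip(
    0.8 * volume / 120 * 100 + 0.2 * weight / 50 * 100 + rng.normal(0, 5, n),
    0, 100,
)

X = np.column_stack([weight, volume, hour])
X_train, X_test, y_train, y_test = train_test_split(X, fill_pct, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Test R^2: {model.score(X_test, y_test):.2f}")
```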

Smart bins equipped with machine learning algorithms can help optimize waste collection routes and reduce the frequency of waste collection. By predicting waste levels, waste collection companies can prioritize the bins that need to be emptied first, avoiding unnecessary trips to bins that are still half empty and improving the efficiency of waste collection.
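
Once fill levels are predicted, prioritization can be as simple as sorting bins by predicted level; the bin IDs, levels, and threshold below are made up for illustration.

```python
# Sketch: rank bins by predicted fill level so the fullest are emptied first.
predicted_fill = {"bin_01": 92.5, "bin_02": 34.0, "bin_03": 78.1, "bin_04": 55.9}

# Only bins above a threshold are scheduled, fullest first
THRESHOLD = 70.0
to_empty = sorted(
    (b for b, level in predicted_fill.items() if level >= THRESHOLD),
    key=predicted_fill.get,
    reverse=True,
)
print("Collection order:", to_empty)  # ['bin_01', 'bin_03']
```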

Smart bins with machine learning algorithms can also help reduce waste and improve sustainability. By predicting waste levels, smart bins can encourage people to reduce waste and recycle more by providing feedback on the amount of waste they generate. Moreover, smart bins can be programmed to separate waste into recyclables, compostables, and landfill, helping to increase the recycling rate and reduce the amount of waste sent to landfills.

Smart bin prediction systems using machine learning can also provide valuable data for city planners and waste management companies. By analyzing data from smart bins, waste management companies can gain a better understanding of waste generation patterns, the types of waste generated in different areas, and the times when waste is generated. This information can help city planners make informed decisions about waste management infrastructure, such as the placement of bins and the frequency of collection.

In conclusion, smart bin prediction systems using machine learning are a crucial tool in improving waste management and reducing waste. With their ability to predict waste levels, optimize waste collection routes, reduce waste and increase sustainability, smart bins are helping to create a greener and cleaner future.

Guide to Making the Best Choice Among Time Series Algorithms

February 02, 2023 // by Sushan7 // No comments



1. Long Short-Term Memory (LSTM) Neural Networks: LSTMs are a type of Recurrent Neural Network (RNN) that are well-suited for time series forecasting.

2. Autoregressive Integrated Moving Average (ARIMA) and Seasonal Autoregressive Integrated Moving Average (SARIMA): ARIMA and SARIMA are traditional time series forecasting methods that have been widely used for many years and are still popular today.

3. Prophet: Prophet is a forecasting model developed by Facebook that uses Bayesian techniques to model time series data with multiple trend and seasonality components.

4. Gradient Boosting Machines (GBM): GBMs are a type of decision tree-based ensemble method that can be used for time series forecasting by modeling the relationship between past observations and future outcomes.

5. Support Vector Regression (SVR): SVR is a type of regression algorithm that can be used for time series forecasting by modeling the relationship between past observations and future outcomes.

6. XGBoost: XGBoost is a type of gradient boosting algorithm that is well-suited for time series forecasting and has been widely used in many competitions and real-world applications.

Ultimately, the choice of algorithm will depend on the complexity of the data, the resources available for model training and prediction, and the desired accuracy and interpretability of the model.
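
As a minimal sketch of how the tree-based methods above (GBM, XGBoost) can be applied to a time series, past observations are turned into lag features. The synthetic series, the choice of three lags, and the use of scikit-learn's GradientBoostingRegressor are all illustrative assumptions.

```python
# Sketch: tree-based time series forecasting with lag features on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = np.arange(200)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, 200)

# Turn the series into a supervised problem: predict y[t] from y[t-3..t-1]
LAGS = 3
X = np.column_stack([series[i:len(series) - LAGS + i] for i in range(LAGS)])
y = series[LAGS:]

# Hold out the last 20 points for evaluation, preserving time order
X_train, X_test, y_train, y_test = X[:-20], X[-20:], y[:-20], y[-20:]
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Test R^2: {model.score(X_test, y_test):.2f}")
```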

Many time series forecasting problems require retaining information about past observations to give accurate and precise predictions. In such cases, if algorithms like ARIMA/SARIMA are used, accuracy and precision can take a toll, because history beyond the chosen lags is effectively neglected, which eventually leads to a loss of the characteristics and variability of the data. ARIMA (AutoRegressive Integrated Moving Average) and SARIMA (Seasonal AutoRegressive Integrated Moving Average) are traditional time series forecasting methods that have been widely used for many years. However, in recent years, with the advancement of deep learning models such as LSTMs, these traditional methods may sometimes be less accurate than more advanced techniques.

The accuracy of ARIMA and SARIMA depends on the quality of the data and the skill of the modeler in selecting the right parameters and order of differencing. If the time series data is relatively simple, with clear patterns and trend, these traditional methods may still provide accurate results. However, if the data is more complex, with multiple underlying patterns and noise, LSTMs and other deep learning models may provide more accurate predictions.

It is worth noting that ARIMA and SARIMA have the advantage of being easier to interpret and implement compared to deep learning models, and they can still provide good results for certain time series forecasting problems. Ultimately, the choice of model will depend on the complexity of the data and the resources available for model training and prediction.
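
For reference, here is a minimal SARIMA sketch using statsmodels on a synthetic monthly series. The (1,1,1)x(1,1,1,12) order is only an illustrative starting point; in practice the orders are chosen via diagnostics or an AIC search.

```python
# Minimal SARIMA sketch on a synthetic monthly series with yearly seasonality.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
t = np.arange(120)  # ten years of monthly data
series = 50 + 0.3 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 120)

model = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
result = model.fit(disp=False)
forecast = result.forecast(steps=12)  # forecast the next year
print(forecast)
```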

How to use Time Series forecasting in Machine Learning

February 02, 2023 // by Sushan7 // No comments

A time series is a sequence of observations recorded over a certain period. A simple example of a time series is how temperature changes day by day or over a month. This tutorial will give you a complete understanding of what time-series data is, what methods are used to forecast a time series, and what makes time series data such a special and complex topic in the field of data science.



Time series forecasting, in simple words, means forecasting or predicting a future value (e.g., a stock price) over a period of time. There are different approaches to predicting the value. Consider an example: a company XYZ records its website traffic every hour and now wants to forecast the total traffic of the coming hour.
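
To make the XYZ example concrete, here is a minimal sketch of a few naive estimates for the coming hour's traffic, corresponding to the simple perspectives discussed next; the series values are made up.

```python
# Sketch of simple forecasting perspectives on a hypothetical hourly traffic series.
import pandas as pd

traffic = pd.Series([120, 135, 128, 150, 160, 155, 170, 165])

naive_mean = traffic.mean()                        # mean of all observations
recent_mean = traffic.tail(2).mean()               # mean of the two most recent hours
weighted = traffic.ewm(alpha=0.5).mean().iloc[-1]  # more weight on recent values

print(f"Mean of all: {naive_mean:.1f}")
print(f"Mean of last two: {recent_mean:.1f}")
print(f"Exponentially weighted: {weighted:.1f}")
```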

Different people can have different perspectives: one might take the mean of all observations, another the mean of the two most recent observations, another might give more weight to the current observation and less to the past, and yet another might use interpolation. There are many methods to forecast the values. While forecasting time series values, three important components need to be taken care of, and the main task of time series forecasting is to forecast these three:

1) Seasonality - Seasonality simply means that, in a particular domain, there are some months where the output value peaks compared to other months. For example, if you observe the data of tour and travel companies over the past 3 years, you can see that in November and December demand is very high due to the holiday and festival season. So while forecasting time series data we need to capture this seasonality.

2) Trend - The trend is another important factor. It describes whether there is a steadily increasing or decreasing movement in the time series, i.e., whether the underlying value (such as an organization's sales) rises or falls over a period of time, apart from the seasonal ups and downs.

3) Unexpected Events - Unexpected events are dynamic changes in an organization, or in the market, that cannot be captured in advance. For example, during the recent pandemic, if you observe the Sensex or Nifty chart there is a huge decrease in stock prices; that is an unexpected event in the surroundings. There are methods and algorithms with which we can capture seasonality and trend, but unexpected events occur dynamically, so capturing them is very difficult.
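
To make the first two components concrete, here is a minimal sketch of separating trend and seasonality with statsmodels' seasonal_decompose; the synthetic monthly series is an illustrative assumption.

```python
# Sketch: decomposing a synthetic monthly series into trend and seasonality.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(7)
months = np.arange(60)  # five years of monthly data
values = (100 + 2 * months                          # upward trend
          + 15 * np.sin(2 * np.pi * months / 12)    # yearly seasonality
          + rng.normal(0, 3, 60))                   # noise
series = pd.Series(values, index=pd.date_range("2015-01-01", periods=60, freq="MS"))

decomposition = seasonal_decompose(series, model="additive", period=12)
print(decomposition.trend.dropna().head())   # the recovered trend
print(decomposition.seasonal.head(12))       # the repeating seasonal pattern
```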

Friday, March 4, 2022

Green Computing - The solution to reducing the carbon footprint of the IT industry.

March 04, 2022 // by Sushan7 // No comments

Greenhouse gas emissions of a laptop

The production of a generic laptop results in about 130-200 lbs. of CO2e, and over its life cycle it will have generated around twice that amount in total greenhouse gas emissions. A 10-year-old tree absorbs almost 48.5 lbs. of CO2 per year. Taking an average of 75.2 lbs. of CO2e per year as the amount a laptop is responsible for, two full-grown trees are needed to absorb the emissions of a laptop alone. In other words, one would have to plant two trees each year for the carbon footprint of his/her laptop to reach zero in ten years' time.

Performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle for a cup of tea, according to new research.

  • As our computing capabilities and the amount of data that we generate each year continue to grow, so do their contributions to carbon emissions.
  • Even though these emissions are indirect and somewhat abstract in nature, they are very real.
  • Data centers contribute to carbon emissions through electricity consumption, both for powering computation and for long-term data storage, resulting in an estimated 100 megatonnes of CO2 emissions per year. By considering parameters such as how long an algorithm runs and what hardware (CPU, GPU, memory) it uses, one can estimate its carbon footprint in grams of CO2 equivalent; a rough sketch of such an estimate appears after this list.
  • Computing technology and high-powered computing are here to stay because of their many benefits. So, adopting green computing is a great way to tackle emissions of greenhouse gases.
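
As a rough sketch of the estimate mentioned above, energy use can be approximated from hardware power draw, runtime, and the data center's PUE (power usage effectiveness), then converted with a grid carbon-intensity figure. All the default numbers below are illustrative assumptions, not measured values.

```python
# Sketch: estimating a compute job's footprint in grams of CO2 equivalent.
# Power draws, PUE, and grid intensity are illustrative placeholder values.
def carbon_footprint_g(hours, cpu_watts=65, gpu_watts=250, pue=1.5,
                       grid_g_co2_per_kwh=475):
    """Estimate grams of CO2e for a compute job of the given duration."""
    energy_kwh = (cpu_watts + gpu_watts) * hours / 1000 * pue
    return energy_kwh * grid_g_co2_per_kwh

# e.g. training a model for 10 hours on one CPU + one GPU
print(f"{carbon_footprint_g(10):.0f} g CO2e")
```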


So, how can an organization implement green computing in order to reduce their carbon footprint?
Some steps to be implemented are as follows:
Proclamation of Green Intentions - It is always best to begin Green IT initiatives by communicating the intention to adopt an environment-friendly IT infrastructure. The push for energy efficiency should be cascaded down to every staff member, setting the stage for collaboration between departments.
Measurement of the Current Carbon Footprint of IT Components - Knowing where the company stands in terms of the carbon footprint brought about by its information technology services is important. Quickly establish a carbon footprint reference point: check the power usage in the IT center and compare it with existing power-efficiency standards and metrics for the industry.
Usage of More Efficient Computer Applications - By using more efficient computer applications, your IT systems can better deal with inefficiencies. Besides, faster software spares the servers from regularly operating at maximum capacity, thereby consuming less power.
Usage of More Efficient Cooling Systems - To reduce CRAC (Computer Room Air Conditioning) power consumption, invest in supplemental cooling systems placed between the rows of servers in the data center.
Result Monitoring and Continuous IT Optimization - Lastly, always check the results of green IT initiatives and compare them with the benchmarks and metrics set for the company. A good example is checking total power consumption each month.

Hope that further implementations of Green Computing will soon be seen, and that we all move forward with the goal of reducing the carbon footprint of computing via green methods.

Tuesday, September 14, 2021

Role of data structures in project development

September 14, 2021 // by Sushan7 // 5 comments

Data structures play a very important role in project development, as a data structure is at the heart of any operation to be performed. Without proper data structures, a project will be severely limited, and the user could only perform basic operations.



•Helps to solve a problem more efficiently:
Suppose a person develops a software module for solving business problems. This may involve the whole software development life cycle, including requirement analysis, design, coding, and maintenance. To perform these activities, data structures play a very important role. Also, if the developer does not know the concepts of data structures properly, he or she will not be able to use the API properly (an Application Programming Interface hides the internal implementation of a data structure and its algorithms behind object-oriented programming principles).
How to choose the correct data structure for solving a problem? When selecting a data structure, you should follow these steps:
1. Analyze your problem to determine the basic operations that must be supported. Examples of basic operations include inserting a data item into the data structure, deleting a data item from the data structure, and finding a specified data item.
2. Quantify the resource constraints for each operation.
3. Select the data structure that best meets these requirements.
This three-step approach operationalizes a data-centered view of choosing a data structure: the first concern is the data and the operations to be performed on it, the next concern is the representation of that data, and the final concern is the implementation of that representation. To select a data structure from an implementation-centered view, the following questions should be asked:

1. Are all data items inserted into the data structure at the beginning, or are insertions interspersed with other operations? Static applications (where the data are loaded at the beginning and never change) typically get by with simpler data structures to achieve an efficient implementation, while dynamic applications often require something more complicated.
2. Can data items be deleted? If so, this will probably make the implementation more complicated.
3. Are all data items processed in some well-defined order, or is search for specific data items allowed? "Random access" search generally requires more complex data structures.
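
As a small illustration of this data-centered view, consider the same "find a record" operation backed by two different structures; the customer records below are made up.

```python
# Sketch: the structure should match the dominant operation.
# A list scans every record; a dict keyed by ID gives constant-time lookup.
customers = [("c1001", "Asha"), ("c1002", "Ravi"), ("c1003", "Mei")]

# List: O(n) search, fine if lookups are rare or the data is tiny
def find_in_list(cid):
    for customer_id, name in customers:
        if customer_id == cid:
            return name
    return None

# Dict: O(1) average lookup, the better fit when "find by ID" dominates
customer_index = dict(customers)

print(find_in_list("c1002"))        # Ravi
print(customer_index.get("c1002"))  # Ravi
```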

Conclusion:
I would like to conclude by saying that data structures and algorithms indeed play a very important role in project development, since they allow a project to offer many functionalities. Moreover, a person whose data structure and algorithm concepts are clear will be of great value in project development.

The importance of Data Structures

September 14, 2021 // by Sushan7 // No comments

A software system has two parts: a front end and a back end. The front end provides an interface, and the back end is a database that contains records of customers. There can be millions of customers. If we have to find the record of one particular customer, or of a number of customers, it is done by a searching method, which is an operation on a data structure.


If any software is to be run, it is first loaded into computer memory, where jobs are entered into queues; the queue is also a data structure concept. As jobs and processes are created, queues form, and these queues can hold many jobs or processes. In a queue, jobs are processed in the same order in which they entered: a new job is placed at the end of the queue. If we have to add or delete jobs in an arbitrary order, the queue concept no longer fits, and a linked list is used instead. If data is to be arranged alphabetically or numerically, that is done by a sorting method, which is another operation on a data structure. Data structures provide the right way to organize information in the digital space. If data is to be stored in a hierarchical fashion, the concept of a tree is used; in Windows, all the directories are stored using the concept of trees.
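
As a small sketch of the job-queue idea, Python's collections.deque gives exactly the FIFO behaviour described above; the job names are illustrative.

```python
# Sketch: jobs are processed in arrival order; new jobs join at the back.
from collections import deque

jobs = deque()
jobs.append("print report")   # enqueue at the back
jobs.append("compile code")
jobs.append("send backup")

while jobs:
    job = jobs.popleft()      # dequeue from the front (FIFO)
    print("Processing:", job)
```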

Real life applications of DSA in Project development

September 14, 2021 // by Sushan7 // 1 comment

DSA is a wide topic and has implementations in various fields. Here we can understand the usage of data structures in project development.


Arrays
Arrays are the simplest data structures; they store items of the same data type. A basic application of arrays is storing data in tabular format.
Applications of arrays in project development are as follows:
1. The leaderboard of a game can be arranged simply through an array: store the scores and sort them in descending order to clearly make out the rank of each player (sketched below).
2. A simple question paper is an array of numbered questions, each of them assigned some marks.
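
A minimal sketch of the leaderboard idea from item 1, with made-up player names and scores:

```python
# Sketch: scores in an array, sorted descending so the index gives the rank.
scores = [("Maya", 870), ("Ken", 940), ("Ira", 610), ("Tom", 760)]
leaderboard = sorted(scores, key=lambda entry: entry[1], reverse=True)

for rank, (player, score) in enumerate(leaderboard, start=1):
    print(f"{rank}. {player}: {score}")
```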

Linked List
A linked list is a sequential data structure which connects elements, called nodes, through links.
Applications of linked lists in project development are as follows:
1. Images are linked with each other, so an image viewer uses a linked list to show the previous and the next images via the previous and next buttons (see the sketch after this list).
2. Web pages can be accessed using the previous and next URL links, which are connected using a linked list.
3. Music players use the same technique to switch between tracks.
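
A minimal sketch of the image-viewer idea from item 1, using a doubly linked list with made-up filenames:

```python
# Sketch: "next" and "previous" buttons just follow the node links.
class ImageNode:
    def __init__(self, filename):
        self.filename = filename
        self.prev = None
        self.next = None

# Build a small doubly linked list of images
names = ["beach.jpg", "city.jpg", "forest.jpg"]
nodes = [ImageNode(n) for n in names]
for a, b in zip(nodes, nodes[1:]):
    a.next, b.prev = b, a

current = nodes[0]
current = current.next          # "next" button
print(current.filename)         # city.jpg
current = current.prev          # "previous" button
print(current.filename)         # beach.jpg
```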

Stacks
A stack is a data structure that follows LIFO (last in, first out) order.
Applications of stacks in project development are as follows:
1. Syntax in programming languages is parsed using stacks.
2. Stacks are used in many virtual machines, such as the JVM.
3. Forward-backward surfing in a browser (sketched below).
4. The history of visited websites.
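
A minimal sketch of the browser surfing and history idea from items 3 and 4, using two stacks with made-up URLs:

```python
# Sketch: visiting a page pushes the current one onto the back stack;
# going back moves the current page onto the forward stack.
back_stack, forward_stack = [], []
current = "home.html"

def visit(url):
    global current
    back_stack.append(current)
    forward_stack.clear()       # a new visit invalidates the forward history
    current = url

def go_back():
    global current
    if back_stack:
        forward_stack.append(current)
        current = back_stack.pop()

visit("news.html")
visit("article.html")
go_back()
print(current)  # news.html
```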

Queue
A queue is a data structure that follows FIFO (first in, first out) order.
Applications of queues in project development are as follows:
1. Queues can be used to handle congestion in networking.
2. Data packets in communication are arranged in a queue format.
3. When an e-mail is sent, it is queued.
4. Uploading and downloading photos.