Collections

Blog Python Model code and SQLite Database.

From VSCode using SQLite3 Editor, show your unique collection/table in database, display rows and columns in the table of the SQLite database.

The database stores the search history of the stocks that were searched and keeps track of each stock symbol the user entered into the search bar.

From VSCode model, show your unique code that was created to initialize table and create test data.
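A minimal sketch of what that initialization could look like, assuming a Flask-SQLAlchemy model; the Search class, the symbol column, and the initSearches() helper are illustrative names rather than the exact ones in my repository.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///search_history.db'
db = SQLAlchemy(app)

class Search(db.Model):
    __tablename__ = 'searches'
    id = db.Column(db.Integer, primary_key=True)
    symbol = db.Column(db.String(10), nullable=False)

    def read(self):
        # Return the row as a dictionary so the API can serialize it to JSON
        return {'id': self.id, 'symbol': self.symbol}

def initSearches():
    """Create the table and insert a few test rows."""
    with app.app_context():
        db.create_all()
        for sym in ['MSFT', 'AAPL', 'GOOG']:
            db.session.add(Search(symbol=sym))
        db.session.commit()

if __name__ == '__main__':
    initSearches()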

Lists and Dictionaries

Blog Python API code and use of List and Dictionaries.

In VSCode using Debugger, show a list as extracted from database as Python objects.

In VSCode use Debugger and list, show two distinct examples of dictionaries, show Keys/Values using debugger.

In the debugger terminal, the keys are shown in purple and the values in orange. For example, the key ‘symbol’ appears in purple and its value ‘MSFT’ appears in orange.
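Continuing the model sketch above, this is roughly how the list of Python objects extracted from the database becomes the list of dictionaries seen in the debugger; the read() helper and the sample values are assumptions.

# (runs inside the Flask app context)
searches = Search.query.all()           # list of Python objects extracted from the database
history = [s.read() for s in searches]  # list of dictionaries with Keys/Values

# Inspecting `history` in the VSCode debugger shows key/value pairs such as:
# [{'id': 1, 'symbol': 'MSFT'}, {'id': 2, 'symbol': 'AAPL'}, ...]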

APIs and JSON

Blog Python API code and use of Postman to request and respond with JSON.

In VSCode, show Python API code definition for request and response using GET, POST, UPDATE methods. Discuss algorithmic condition used to direct request to appropriate Python method based on request method. (line 16)

POST: I called the POST method to enter the name of the stock. When the user types the name of the stock into the search bar and submits it, the POST method is called. Line 16 contains a condition that the stock symbol cannot be blank in order for the feature to work.

GET: I called the GET method to receive data about the stock that is searched by the user. The GET method fetches data from the external API named Alpha Vantage. It fetches the following data: open value, high value, close value, and volume of the specific stock.
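A rough sketch of how the GET handler could call Alpha Vantage; the request parameters follow Alpha Vantage's TIME_SERIES_DAILY format, but the helper name, API key placeholder, and exact parsing are assumptions rather than my project's literal code.

import requests

API_KEY = 'YOUR_ALPHA_VANTAGE_KEY'  # placeholder

def get_stock_data(symbol):
    url = 'https://www.alphavantage.co/query'
    params = {'function': 'TIME_SERIES_DAILY', 'symbol': symbol, 'apikey': API_KEY}
    data = requests.get(url, params=params).json()
    # Take the most recent trading day from the time series
    latest_day = next(iter(data['Time Series (Daily)']))
    quote = data['Time Series (Daily)'][latest_day]
    return {
        'symbol': symbol,
        'open': quote['1. open'],
        'high': quote['2. high'],
        'close': quote['4. close'],
        'volume': quote['5. volume'],
    }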

In VSCode, show algorithmic conditions used to validate data on a POST condition.
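A minimal sketch of that POST validation, building on the model sketch above (app, db, and Search come from that sketch); the route path and field names are assumptions, but the "symbol cannot be blank" condition mirrors the one described at line 16.

from flask import request, jsonify

@app.route('/api/search/', methods=['POST'])
def post_search():
    body = request.get_json(silent=True) or {}
    symbol = (body.get('symbol') or '').strip()
    if not symbol:  # algorithmic condition: reject a blank stock symbol
        return jsonify({'message': 'symbol is required'}), 400
    db.session.add(Search(symbol=symbol.upper()))
    db.session.commit()
    return jsonify({'symbol': symbol.upper()}), 200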

In Postman, show URL request and Body requirements for GET, POST, and UPDATE methods. In Postman, show the JSON response data for 200 success conditions on GET, POST, and UPDATE methods.

In Postman, show the JSON response for error for 400 when missing body on a POST request.

In Postman, show the JSON response for error for 404 when providing an unknown user ID to an UPDATE request.

Frontend

Blog JavaScript API fetch code and formatting code to display JSON.

In Chrome inspect, show response of JSON objects from fetch of GET, POST, and UPDATE methods.

In the Chrome browser, show a demo (GET) of obtaining an Array of JSON objects that are formatted onto the browser's screen.

In JavaScript code, describe fetch and method that obtained the Array of JSON objects.

async function fetchSearchHistory() {
    // GET request to the backend search-history API
    const response = await fetch(`http://127.0.0.1:8055/api/search/`);

    if (response.ok) {
        // Parse the Array of JSON objects and pass it to the formatter
        const historyData = await response.json();
        displaySearchHistory(historyData);
    } else {
        document.getElementById('searchHistory').innerHTML = 'Failed to fetch search history.';
    }
}

In JavaScript code, show code that performs iteration and formatting of data into HTML.

function displaySearchHistory(historyData) {
    let content = '<h2>Search History</h2><table>';
    content += '<tr><th>Symbol</th></tr>';
    // Iterate over each JSON object and format its symbol as a table row
    historyData.forEach(item => {
        content += `<tr><td>${item.symbol}</td></tr>`;
    });
    content += '</table>';
    document.getElementById('searchHistory').innerHTML = content;
}

In the Chrome browser, show a demo (POST or UPDATE) gathering and sending input and receiving a response that shows the update. Repeat this demo showing both success and failure.

SUCCESS

FAILURE

In JavaScript code, show and describe code that handles success. Describe how code shows success to the user in the Chrome Browser screen.

if (response.ok) {
    // Success path: parse the JSON and render it into the page
    const historyData = await response.json();
    displaySearchHistory(historyData);
}

On success, the JSON search history is parsed and rendered as the table, so the user sees the updated list directly on the screen.

In JavaScript code, show and describe code that handles failure. Describe how the code shows failure to the user in the Chrome Browser screen.

         
} else {
    // Failure path: write an error message into the page
    document.getElementById('searchHistory').innerHTML = 'Failed to fetch search history.';
}

On failure, the message 'Failed to fetch search history.' replaces the table content, so the user sees the error directly on the page.

Optional/Extra, Algorithm Analysis

In the ML projects, there is a great deal of algorithm analysis. Think about preparing data and predictions.

Show algorithms and preparation of data for analysis. This includes cleaning, encoding, and one-hot encoding.

  • One-hot encoding converts each categorical value into a new column and assigns a binary value of 1 or 0. In my ML project, the categorical features cut, color, and clarity are encoded this way (see the sketch below).
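A minimal sketch of that preparation step, assuming pandas and a diamonds-style CSV; the file path and column list are placeholders for whatever my project actually loads.

import pandas as pd

df = pd.read_csv('diamonds.csv')  # placeholder path
df = df.dropna()                  # cleaning: drop incomplete rows

# One-hot encode the categorical features; numeric columns pass through unchanged
X = pd.get_dummies(df.drop(columns=['price']), columns=['cut', 'color', 'clarity'])
y = df['price']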

Show algorithms and preparation for predictions.

  • The code demonstrates training the RandomForestRegressor with the preprocessed features and then using it to predict the diamond prices. It also evaluates the performance using the mean squared error (MSE) of the predictions, as sketched below.
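A short sketch of that training and evaluation flow with scikit-learn, continuing from the encoded X and y above; the split ratio and random_state are illustrative, not the values from my project.

from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hold out part of the data so the MSE reflects unseen diamonds
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print(f'MSE: {mse:.2f}')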

Discuss concepts and understanding of Decision Tree analysis algorithms.

  • My code employs a RandomForestRegressor, which is an ensemble of Decision Trees.

The RandomForestRegressor is an ensemble machine learning algorithm used for regression tasks, which is part of the ensemble module in the scikit-learn library. Ensemble learning combines the predictions of several base estimators built with a given learning algorithm in order to improve generalizability and robustness over a single estimator.

How it works:

Ensemble of Decision Trees: A random forest is a collection (ensemble) of decision trees. In the context of regression, it uses multiple decision trees to predict a continuous target variable based on various input features.

Bagging: The algorithm uses the bagging (Bootstrap AGGregatING) technique to create an ensemble. Each tree in the forest is trained on a different bootstrap sample of the data, which is a sample taken with replacement from the original dataset. This helps in reducing the variance of individual trees, making the model more robust.

Random Feature Selection: When building individual trees, the random forest algorithm introduces additional randomness by selecting a random subset of features for splitting at each node. This decorrelates the trees and further reduces variance.

Aggregation: For regression tasks, the final prediction is usually the average of the predictions from all the individual trees in the forest. This aggregation helps to improve the predictive accuracy and control over-fitting.

Hyperparameters: Important hyperparameters for RandomForestRegressor include n_estimators (number of trees in the forest), max_features (the maximum number of features to consider when looking for the best split), max_depth (maximum depth of the tree), and others that control the growth of individual trees.

Versatility: Random forests work well with a large range of data items without the need for scaling and can handle thousands of input variables without variable deletion. They provide a pretty good indicator of feature importance as well.
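A brief sketch tying the hyperparameter and feature-importance points together, continuing from the training data above; the values shown are illustrative, not the settings tuned in my project.

from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(
    n_estimators=200,     # number of trees in the forest
    max_features='sqrt',  # features considered when looking for the best split
    max_depth=10,         # maximum depth of each tree
    random_state=42,
)
model.fit(X_train, y_train)

# Feature importance indicator mentioned above
for name, importance in zip(X_train.columns, model.feature_importances_):
    print(f'{name}: {importance:.3f}')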


Deployment

  • Nginx configuration
  • Moved CORS code from main.py to __init__.py (see the sketch below)
  • @token_required should be guarding certain endpoints
  • Manage secret keys for CSRF
  • Make sure to change the port number when deploying
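A rough sketch of those deployment-related Python pieces, assuming Flask with flask_cors; the origin, secret key, and simplified token_required guard are placeholders, not the production values or my project's real decorator.

from functools import wraps
from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
app.config['SECRET_KEY'] = 'replace-with-a-managed-secret'  # manage secret keys for CSRF outside the repo

# CORS lives in __init__.py so every blueprint shares one policy
CORS(app, supports_credentials=True, origins=['https://my-frontend.example'])

def token_required(f):
    """Simplified stand-in for the real decorator, which validates the user's token."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        if not request.headers.get('Authorization'):
            return jsonify({'message': 'Token is missing'}), 401
        return f(*args, **kwargs)
    return wrapper

@app.route('/api/search/', methods=['GET'])
@token_required
def protected_history():
    return jsonify([]), 200

if __name__ == '__main__':
    app.run(port=8055)  # local port from the fetch URL above; change when deploying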