Graduate School
Mine Sweeper Game
Minesweeper using Reinforcement Learning
Import Libraries
In [1]:
import pandas as pd
import numpy as np
from itertools import product
import random
from random import choice
from collections import namedtuple
from scipy.signal import convolve2d
from tqdm import trange
from time import sleep
import matplotlib.pyplot as plt
from IPython.display import clear_output
import ipywidgets as widgets
%matplotlib..
2024.09.10
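The convolve2d import above hints at the usual Minesweeper trick: convolving a 0/1 mine grid with a 3x3 kernel of ones yields each cell's adjacent-mine count. A minimal sketch of that idea (the board and all names here are assumptions, not the post's code):

import numpy as np
from scipy.signal import convolve2d

mines = np.zeros((5, 5), dtype=int)                # hypothetical 0/1 mine board
mines[1, 2] = mines[3, 3] = 1

kernel = np.ones((3, 3), dtype=int)                # 3x3 neighborhood, self included
counts = convolve2d(mines, kernel, mode='same')    # neighbor sums per cell
counts -= mines                                    # drop self-counts on mine cells
print(counts)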
Last Card Game
1. Game rule
There are 100 cards.
Two players: player-0 (AI), player-1 (human).
Player-turn sequence: player0-player1-player0-player1-...
Each player draws up to 3 cards per turn.
The player who draws the 100th card (the last card) wins!
2. Gameplay
# of drawn cards = 0, Player-0 draws 3 cards
# of drawn cards = 3, Player-1 draws 1 card
# of drawn cards = 4, Player-0 draws 1 card
# of drawn cards = 5, ..
2024.09.10
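The post trains an AI for this game, but the rules also admit a closed-form optimum worth keeping in mind as a baseline: drawing card 100 wins and 100 % 4 == 0, so whoever ends a turn with the drawn count at a multiple of 4 can force the win. A minimal sketch of that baseline (a reference point I am adding, not the post's RL agent):

import random

def optimal_draw(drawn):
    """How many cards (1-3) to draw, given `drawn` cards already taken."""
    gap = (100 - drawn) % 4
    return gap if gap in (1, 2, 3) else random.randint(1, 3)  # losing position: any move

drawn, player = 0, 0
while drawn < 100:
    take = min(optimal_draw(drawn), 100 - drawn)
    drawn += take
    print(f"player-{player} draws {take} -> {drawn} cards drawn")
    if drawn == 100:
        print(f"player-{player} wins!")
    player = 1 - player

This matches the gameplay trace above: after Player-0 opens with 3 cards, Player-1's optimal reply is 1 card, landing the total on a multiple of 4.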
Solving Maze using Reinforcement Learning
1. DP using greedy
from pyamaze import maze, agent
import numpy as np

# Load the Maze
size = 5
m = maze(size, size)
m.CreateMaze(loadMaze="maze.csv")

# create the environment model
states = list(m.maze_map.keys())
actions = ['E', 'N', 'W', 'S']

# define how an action changes a state
def step(state, action):
    x, y = state
    if action == 'E':
        y += 1
    elif action == 'W':
        y -= 1
    elif action == 'N':
        ..
2024.09.10
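The preview cuts off inside step(). A plausible completion, assuming pyamaze's (row, col) convention where row 1 is the top row, so 'N' decreases the row and 'S' increases it (an assumption, not the post's exact code):

def step(state, action):
    x, y = state                 # x = row, y = column
    if action == 'E':
        y += 1
    elif action == 'W':
        y -= 1
    elif action == 'N':
        x -= 1                   # assumed: north moves toward row 1
    elif action == 'S':
        x += 1
    return (x, y)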
AutoEncoder Implementation
Import Library
In [1]:
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import tensorflow as tf
%matplotlib inline
/home/pmi-minos-3090-single/anaconda3/lib/python3.9/site-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.16.5 and ..
Data Load
In [2]:
data = loa..
2024.09.10
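Given the imports, the notebook presumably contrasts an autoencoder's learned 2-D code with PCA on the breast-cancer features. A minimal Keras sketch of that setup (layer sizes and training settings are assumptions, not the post's code):

import numpy as np
import tensorflow as tf
from sklearn.datasets import load_breast_cancer

X = load_breast_cancer().data.astype("float32")
X = (X - X.mean(axis=0)) / X.std(axis=0)             # standardize the 30 features

inp = tf.keras.Input(shape=(30,))
h = tf.keras.layers.Dense(16, activation="relu")(inp)
code = tf.keras.layers.Dense(2)(h)                   # 2-D bottleneck, comparable to PCA(2)
h = tf.keras.layers.Dense(16, activation="relu")(code)
out = tf.keras.layers.Dense(30)(h)

auto = tf.keras.Model(inp, out)
auto.compile(optimizer="adam", loss="mse")
auto.fit(X, X, epochs=50, batch_size=32, verbose=0)  # learn to reconstruct the input

encoder = tf.keras.Model(inp, code)
Z = encoder.predict(X, verbose=0)                    # 2-D embedding to plot against PCA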
Markov Decision Process Example
MDP example
State Value Function Changes in Policy Iterations
Import Library
In [1]:
import numpy as np
Grid World
In [2]:
BOARD_ROWS = 3  # grid world height
BOARD_COLS = 3  # grid world width
GAMMA = 1.0
POSSIBLE_ACTIONS = [0, 1, 2, 3]  # left, right, up, down
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # actions written as coordinate offsets
REWARDS = []
Environment
In [3]:
class Env:
    def __init__(self):
        self.heig..
2024.09.10
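With the constants above, one policy-evaluation sweep is just the Bellman expectation update applied to every cell. A minimal sketch under a uniform random policy (the goal cell, its reward, and the terminal handling are placeholder assumptions, not the post's Env class):

import numpy as np

BOARD_ROWS, BOARD_COLS, GAMMA = 3, 3, 1.0
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
GOAL = (2, 2)                                   # hypothetical terminal goal
reward = np.zeros((BOARD_ROWS, BOARD_COLS))
reward[GOAL] = 1.0

V = np.zeros((BOARD_ROWS, BOARD_COLS))
for _ in range(50):                             # repeat V <- sum_a pi(a)[r + gamma * V]
    new_V = np.zeros_like(V)
    for x in range(BOARD_ROWS):
        for y in range(BOARD_COLS):
            if (x, y) == GOAL:                  # terminal state keeps value 0
                continue
            for dx, dy in ACTIONS:              # uniform random policy: prob 1/4 each
                nx = min(max(x + dx, 0), BOARD_ROWS - 1)   # bump into walls
                ny = min(max(y + dy, 0), BOARD_COLS - 1)
                new_V[x, y] += 0.25 * (reward[nx, ny] + GAMMA * V[nx, ny])
    V = new_V
print(np.round(V, 2))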
Stock Price Prediction using LSTM
Import Library
In [1]:
import FinanceDataReader as fdr
from sklearn.preprocessing import MinMaxScaler
import torch
import time
import matplotlib.pyplot as plt
%matplotlib inline
Define LSTM model
In [2]:
class LSTM(torch.nn.Module):
    def __init__(self, num_classes, input_size, hidden_size, num_layers, seq_length, device):
        super(LSTM, self).__init__()
        self.num_classes = n..
2024.09.10
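The preview cuts off inside __init__. A minimal completion of such a regressor, keeping the signature shown but otherwise assuming the usual pattern (LSTM over the window, linear head on the last hidden state); everything past the signature is a sketch, not the post's code:

import torch

class LSTM(torch.nn.Module):
    def __init__(self, num_classes, input_size, hidden_size, num_layers, seq_length, device):
        super().__init__()
        self.num_layers = num_layers
        self.hidden_size = hidden_size
        self.device = device
        self.lstm = torch.nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = torch.nn.Linear(hidden_size, num_classes)

    def forward(self, x):                      # x: (batch, seq_length, input_size)
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size, device=self.device)
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(x, (h0, c0))
        return self.fc(out[:, -1, :])          # predict from the last time step

model = LSTM(num_classes=1, input_size=1, hidden_size=32, num_layers=1, seq_length=10, device="cpu")
print(model(torch.randn(4, 10, 1)).shape)      # torch.Size([4, 1])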
Exploratory data analysis and visualization
Exploratory Data Analysis and Data Visualization Practice
In [1]:
!pip install -q --upgrade matplotlib
/bin/bash: pip: command not found
Introduction
"Statistician" Nightingale's "rose diagram"
In [2]:
from packaging import version
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib_inline.backend_inline
import numpy as np
import pandas as pd
assert version.parse(mpl.__version__) >= version.Version("3.5"), (
    "If an error occurs, first ..
2024.09.10
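Since the section opens with Nightingale's rose diagram: a rose (polar area) chart is just a bar chart on polar axes. A minimal sketch with made-up monthly counts (placeholder data, not Nightingale's figures or the post's code):

import numpy as np
import matplotlib.pyplot as plt

counts = np.array([12, 18, 25, 31, 28, 20, 15, 10, 8, 6, 9, 11])  # placeholder data
theta = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)           # one wedge per month

ax = plt.subplot(projection="polar")
# Nightingale scaled *area* with the value, so sqrt keeps wedge area proportional.
ax.bar(theta, np.sqrt(counts), width=2 * np.pi / 12, alpha=0.7)
ax.set_title("Rose (polar area) diagram sketch")
plt.show()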
IBM HR data Binary Classification
hr_data_binary_classification
IBM HR data
https://www.kaggle.com/datasets/pavansubhasht/ibm-hr-analytics-attrition-dataset
Mount Google Drive
In [ ]:
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
1. Problem Identification and Goal Setting
2. Data Collection and Preprocessing
In [ ]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import missingno as ms..
2024.09.10
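A minimal sketch of the binary-classification setup on the linked Kaggle data (the CSV filename matches the Kaggle download, but the local path and the model choice are assumptions, not the post's pipeline):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")       # assumed local copy
y = (df["Attrition"] == "Yes").astype(int)                      # Yes/No -> 1/0 target
X = pd.get_dummies(df.drop(columns=["Attrition"]), drop_first=True)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))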
Face Recognition
Implement CNN Model
Import Library
In [1]:
import scipy.io as sio
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
%matplotlib inline
2022-12-07 17:52:14.633420: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To tur..
2024.09.10
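A minimal Keras CNN classifier sketch to go with "Implement CNN Model" (the image size and number of identities are placeholders; the scipy.io import suggests the faces arrive as a .mat file, which is also an assumption):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),                    # assumed grayscale face crops
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(40, activation="softmax"),      # assumed 40 identities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()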
Image Prediction
Import Library
In [1]:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
2022-11-23 17:10:00.971077: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `..
2024.09.10
Linear Classifier 02
In [1]:
import numpy as np
import tensorflow as tf
import time
np.random.seed(101)
2022-10-12 17:13:46.461742: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
build your own data ..
2024.09.10
Linear Classifier 01
In [1]:
import numpy as np
np.random.seed(101)
build your own data
x_train: [12, 4], y_train: [12, ], x_test: [4, 4], y_test: [4, ]
[1, 0, 0, 0] --> 0
[0, 1, 0, 0] --> 0
[0, 0, 1, 0] --> 1
[0, 0, 0, 1] --> 1
In [2]:
x_train = np.zeros(shape=[12, 4], dtype=np.float32)
y_train = np.random.randint(0, 4, [12, ])
print(x_train)
print(y_train)
[[0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 ..
2024.09.10
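The visible cell only allocates zeros and random labels, but the stated spec is one-hot rows labeled 0 for the first two positions and 1 for the last two. A minimal sketch of data built to that spec, with a least-squares linear fit as a stand-in classifier (an assumption, not the post's model):

import numpy as np

hot = np.repeat(np.arange(4), 3)                  # 12 samples, 3 per one-hot position
x_train = np.eye(4, dtype=np.float32)[hot]        # [12, 4] one-hot rows
y_train = (hot >= 2).astype(np.float32)           # positions 0,1 -> 0; positions 2,3 -> 1

x_test = np.eye(4, dtype=np.float32)              # the four one-hot rows, [4, 4]
y_test = np.array([0, 0, 1, 1], dtype=np.float32)

w, *_ = np.linalg.lstsq(x_train, y_train, rcond=None)  # fit y ~ x @ w
pred = (x_test @ w > 0.5).astype(np.float32)
print("predictions:", pred, "accuracy:", (pred == y_test).mean())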