DeepLearning.ai - Week 1 - Convolutional Model: Step by Step
Published: 2019-06-21


1 - Import Packages

import numpy as np
import h5py
import math
import matplotlib.pyplot as plt
%matplotlib inline

2 - Global Parameter Settings

plt.rcParams["figure.figsize"] = (5.0, 4.0) # 设置figure_size尺寸plt.rcParams["image.interpolation"] = "nearest" # 设置插入风格plt.rcParams["image.cmap"] = "gray" # 设置颜色风格 # 动态重载模块,模块修改时无需重新启动%load_ext autoreload %autoreload 2 # 随机数种子np.random.seed(1)

3 - Convolutional Neural Networks

3.1 - Zero-padding

Given an input tensor X and a padding size pad, pad the height and width of X with zeros. This is straightforward to implement with numpy's np.pad.

# GRADED FUNCTION: zero_pad

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
    as illustrated in Figure 1.

    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions

    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """

    ### START CODE HERE ### (≈ 1 line)
    # np.pad's first argument is the tensor to pad; the second gives, for each axis,
    # the amount to pad on each side; the third is the pad mode; the fourth gives,
    # matching the second argument, the value to pad with on each side of each axis.
    X_pad = np.pad(X,
        ((0, 0), (pad, pad), (pad, pad), (0, 0)),
        "constant",
        constant_values=((0, 0), (0, 0), (0, 0), (0, 0)))
    ### END CODE HERE ###

    return X_pad
np.random.seed(1)               # seed the random number generator
x = np.random.randn(4, 3, 3, 2) # random input tensor
x_pad = zero_pad(x, 2)          # zero-pad the input tensor x
print("x.shape =", x.shape)
print("x_pad.shape =", x_pad.shape)
print("x[1,1] =", x[1,1])
print("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
Result:

x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1,1] = [[ 0.90085595 -0.68372786]
 [-0.12289023 -0.93576943]
 [-0.26788808  0.53035547]]
x_pad[1,1] = [[ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]]
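As a side note (a minimal sketch, not part of the assignment): np.pad's pad_width argument takes one (before, after) pair per axis, so padding only the two spatial axes leaves the batch and channel dimensions untouched.

# Hypothetical example: pad only the spatial axes of a (2, 3, 3, 1) batch by 1.
x_demo = np.zeros((2, 3, 3, 1))
x_demo_pad = np.pad(x_demo, ((0, 0), (1, 1), (1, 1), (0, 0)), "constant")
assert x_demo_pad.shape == (2, 5, 5, 1)  # m and n_C are unchanged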

3.2 - Single step of convolution

Given an input slice a_slice_prev, compute the result of applying a filter W of the same shape together with a bias b. Python supports element-wise multiplication of tensors, so the result is simply the element-wise product, summed, plus the bias.

# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
    of the previous layer.

    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)

    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice and W. Do not add the bias yet.
    s = a_slice_prev * W
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###

    return Z
np.random.seed(1)
# random a_slice_prev and a filter W of the same shape
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
Result:

Z = -6.99908945068
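A quick sanity check (a sketch, not in the original notebook): with an all-ones filter and zero bias, a single convolution step reduces to summing the slice.

# Hypothetical check: an all-ones filter with zero bias just sums the slice.
a_demo = np.random.randn(3, 3, 2)
Z_demo = conv_single_step(a_demo, np.ones((3, 3, 2)), np.zeros((1, 1, 1)))
assert np.isclose(Z_demo, np.sum(a_demo))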

 3.3 - Convolutional Neural Networks - Forward pass

The output dimensions are related to the input dimensions as follows:

$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$

$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_C = \text{number of filters used in the convolution}$$
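For example, plugging the test case below into these formulas ($n_{H_{prev}} = 4$, $f = 2$, $pad = 2$, $stride = 2$) gives $n_H = \lfloor (4 - 2 + 2 \times 2)/2 \rfloor + 1 = 4$, so a batch of shape $(10, 4, 4, 3)$ convolved with 8 filters yields an output of shape $(10, 4, 4, 8)$.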

# GRADED FUNCTION: conv_forward

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function

    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"

    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """

    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈1 line)
    # m -- number of examples
    # n_H_prev -- height of the input tensor
    # n_W_prev -- width of the input tensor
    # n_C_prev -- number of channels of the input tensor
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve dimensions from W's shape (≈1 line)
    # f -- height/width of the filter (square cross-section)
    # n_C_prev -- filter channels = channels of the input tensor
    # n_C -- number of filters = channels of the output tensor
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters" (≈2 lines)
    # stride and padding size
    stride = hparameters["stride"]
    pad = hparameters["pad"]

    # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
    n_H = math.floor((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = math.floor((n_W_prev - f + 2 * pad) / stride) + 1

    # Initialize the output volume Z with zeros, using the shape computed above. (≈1 line)
    Z = np.zeros(shape=(m, n_H, n_W, n_C))

    # Create A_prev_pad by padding A_prev with zeros
    A_prev_pad = zero_pad(A_prev, pad)

    for i in range(m):                              # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i]                  # Select ith training example's padded activation
        for h in range(n_H):                        # loop over vertical axis of the output volume
            for w in range(n_W):                    # loop over horizontal axis of the output volume
                for c in range(n_C):                # loop over channels (= #filters) of the output volume

                    # Find the corners of the current "slice" (≈4 lines)
                    # Work backwards from each output position to the input region that feeds it
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f

                    # Use the corners to define the (3D) slice of a_prev_pad.
                    # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
                    Z[i, h, w, c] = conv_single_step(a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :], W[:, :, :, c], b[:, :, :, c])

    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))

    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)

    return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10, 4, 4, 3)
W = np.random.randn(2, 2, 3, 8)
b = np.random.randn(1, 1, 1, 8)
hparameters = {"pad": 2, "stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
Result:

Z's mean = 0.0489952035289
Z[3,2,1] = [-0.61490741 -6.7439236  -2.55153897  1.75698377  3.56208902  0.53036437  5.18531798  8.75898442]
cache_conv[0][1][2][3] = [-0.20075807  0.18656139  0.41005165]
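One practical consequence of the size formula (a hedged sketch, not part of the graded code): with stride 1 and an odd filter size f, setting pad = (f - 1) / 2 preserves the spatial dimensions, the usual "same" convolution.

# Hypothetical "same" convolution: f = 3, pad = (3 - 1) // 2 = 1, stride = 1.
np.random.seed(1)
A_demo = np.random.randn(2, 5, 5, 3)
W_demo = np.random.randn(3, 3, 3, 6)
b_demo = np.random.randn(1, 1, 1, 6)
Z_demo, _ = conv_forward(A_demo, W_demo, b_demo, {"pad": 1, "stride": 1})
assert Z_demo.shape == (2, 5, 5, 6)  # height and width are preserved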

 4 - Pooling layer

4.1 - Forward Pooling

Implement both MAX-POOL and AVG-POOL. There is no padding, so the output and input dimensions are related as follows:

$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$

$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$
$$ n_C = n_{C_{prev}}$$
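Plugging the test case below into these formulas ($n_{H_{prev}} = 4$, $f = 4$, $stride = 1$) gives $n_H = \lfloor (4 - 4)/1 \rfloor + 1 = 1$, so each $4 \times 4$ plane collapses to a single value per channel.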

# GRADED FUNCTION: pool_forward

def pool_forward(A_prev, hparameters, mode = "max"):
    """
    Implements the forward pass of the pooling layer

    Arguments:
    A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    hparameters -- python dictionary containing "f" and "stride"
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")

    Returns:
    A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
    """

    # Retrieve dimensions from the input shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve hyperparameters from "hparameters"
    # f is the pooling window size, stride is the step size
    f = hparameters["f"]
    stride = hparameters["stride"]

    # Define the dimensions of the output using the formulas above
    n_H = int(1 + (n_H_prev - f) / stride)
    n_W = int(1 + (n_W_prev - f) / stride)
    n_C = n_C_prev

    # Initialize output matrix A with zeros, using the shape computed above
    A = np.zeros((m, n_H, n_W, n_C))

    ### START CODE HERE ###
    for i in range(m):                         # loop over the training examples
        for h in range(n_H):                   # loop on the vertical axis of the output volume
            for w in range(n_W):               # loop on the horizontal axis of the output volume
                for c in range(n_C):           # loop over the channels of the output volume

                    # Find the corners of the current "slice" (≈4 lines)
                    # Locate the input region that feeds this output position
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f

                    # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
                    a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]

                    # Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean.
                    if mode == "max":                              # MAX-POOL
                        A[i, h, w, c] = np.max(a_prev_slice)       # take the max of the slice
                    elif mode == "average":                        # AVG-POOL
                        A[i, h, w, c] = np.average(a_prev_slice)   # take the mean of the slice
    ### END CODE HERE ###

    # Store the input and hparameters in "cache" for pool_backward()
    cache = (A_prev, hparameters)

    # Making sure your output shape is correct
    assert(A.shape == (m, n_H, n_W, n_C))

    return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride": 1, "f": 4}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
Result:

mode = max
A = [[[[ 1.74481176  1.6924546   2.10025514]]]

 [[[ 1.19891788  1.51981682  2.18557541]]]]

mode = average
A = [[[[-0.09498456  0.11180064 -0.14263511]]]

 [[[-0.09525108  0.28325018  0.33035185]]]]
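A common configuration in practice (a sketch with made-up shapes): f = 2 with stride = 2 tiles the input with non-overlapping windows and halves the spatial dimensions.

# Hypothetical example: 2x2 pooling with stride 2 halves height and width.
A_demo = np.random.randn(1, 6, 6, 3)
A_out, _ = pool_forward(A_demo, {"f": 2, "stride": 2})
assert A_out.shape == (1, 3, 3, 3)  # 6x6 -> 3x3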

 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)

5.1 - Convolutional layer backward pass

5.1.1 - Computing dA

For a given filter $W_c$ and a given training example, the input gradient is computed as follows (note the +=: when the stride is smaller than f, the sliding windows overlap, so several output positions depend on the same input entry and their contributions accumulate):

$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$

da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
5.1.2 - Computing dW

$dW_c$ is computed with the following formula:

$$ dW_c  += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw}  \tag{2}$$

where $a_{slice}$ is the slice of the input that was used to compute the activation $Z_{ij}$.

dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
5.1.3 - Computing db

For a given filter $W_c$, the bias gradient is:

$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$

db[:,:,:,c] += dZ[i, h, w, c]

 

def conv_backward(dZ, cache):
    """
    Implement the backward propagation for a convolution function

    Arguments:
    dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward(), output of conv_forward()

    Returns:
    dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
               numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    dW -- gradient of the cost with respect to the weights of the conv layer (W)
          numpy array of shape (f, f, n_C_prev, n_C)
    db -- gradient of the cost with respect to the biases of the conv layer (b)
          numpy array of shape (1, 1, 1, n_C)
    """

    ### START CODE HERE ###
    # Retrieve information from "cache": the input tensor, filters, biases and hyperparameters
    (A_prev, W, b, hparameters) = cache

    # Retrieve dimensions from A_prev's shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve dimensions from W's shape
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve the stride and pad from "hparameters"
    stride = hparameters["stride"]
    pad = hparameters["pad"]

    # Retrieve dimensions from dZ's shape
    (m, n_H, n_W, n_C) = dZ.shape

    # Initialize dA_prev, dW, db with the correct shapes (all zeros)
    dA_prev = np.zeros(A_prev.shape)
    dW = np.zeros(W.shape)
    db = np.zeros(b.shape)

    # Pad A_prev and dA_prev with zeros
    A_prev_pad = zero_pad(A_prev, pad)
    dA_prev_pad = zero_pad(dA_prev, pad)

    for i in range(m):                       # loop over the training examples

        # select ith training example from A_prev_pad and dA_prev_pad
        a_prev_pad = A_prev_pad[i]
        da_prev_pad = dA_prev_pad[i]

        for h in range(n_H):                   # loop over vertical axis of the output volume
            for w in range(n_W):               # loop over horizontal axis of the output volume
                for c in range(n_C):           # loop over the channels of the output volume

                    # Find the corners of the current "slice"
                    # Locate the input region that fed this output position
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f

                    # Use the corners to define the slice from a_prev_pad
                    a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]

                    # Update gradients for the window and the filter's parameters using the formulas given above
                    da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:, :, :, c] * dZ[i, h, w, c]
                    dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
                    db[:,:,:,c] += dZ[i, h, w, c]

        # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
        # Strip the zero-padded border
        dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad]
    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))

    return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
Result:

dA_mean = 1.45243777754
dW_mean = 1.72699145831
db_mean = 7.83923256462
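As a hedged sanity check (not part of the assignment), db can be verified numerically: taking the loss to be sum(Z), so that dZ is all ones, the finite-difference estimate for one bias entry should match conv_backward's db.

# Hypothetical numeric gradient check for db with L = sum(Z).
np.random.seed(1)
A_chk = np.random.randn(2, 4, 4, 3)
W_chk = np.random.randn(2, 2, 3, 4)
b_chk = np.random.randn(1, 1, 1, 4)
hp_chk = {"pad": 1, "stride": 1}
Z_chk, cache_chk = conv_forward(A_chk, W_chk, b_chk, hp_chk)
_, _, db_chk = conv_backward(np.ones(Z_chk.shape), cache_chk)  # dL/dZ = 1
eps = 1e-6
b_plus = b_chk.copy()
b_plus[0, 0, 0, 0] += eps
Z_plus, _ = conv_forward(A_chk, W_chk, b_plus, hp_chk)
db_approx = (np.sum(Z_plus) - np.sum(Z_chk)) / eps
assert np.isclose(db_approx, db_chk[0, 0, 0, 0])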

5.2 - Pooling layer - backward pass

5.2.1 - Max pooling - backward pass

The create_mask_from_window() function records the position of the maximum entry of a window, for example:

$$ X = \begin{bmatrix} 1 & 3 \\ 4 & 2 \end{bmatrix} \quad \rightarrow \quad M = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \tag{4}$$

def create_mask_from_window(x):
    """
    Creates a mask from an input matrix x, to identify the max entry of x.

    Arguments:
    x -- Array of shape (f, f)

    Returns:
    mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
    """

    ### START CODE HERE ### (≈1 line)
    # Positions of x equal to the maximum are True (i.e. 1); all others are False (i.e. 0)
    mask = (x == np.max(x))
    ### END CODE HERE ###

    return mask
np.random.seed(1)
x = np.random.randn(2, 3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
Result:

x =  [[ 1.62434536 -0.61175641 -0.52817175]
 [-1.07296862  0.86540763 -2.3015387 ]]
mask =  [[ True False False]
 [False False False]]
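One caveat worth noting (an illustrative sketch): if the maximum appears more than once in the window, the mask marks every occurrence, so the gradient is routed to all tied positions.

# Hypothetical tie: both 2.0 entries are marked True.
print(create_mask_from_window(np.array([[1., 2.], [2., 0.]])))
# [[False  True]
#  [ True False]]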
5.2.2 - Average pooling - backward pass

In average pooling, every element of the input window contributes equally to the output, so a known $dZ$ is distributed evenly over the window:

$$ dZ = 1 \quad \rightarrow \quad dZ = \begin{bmatrix} 1/4 & 1/4 \\ 1/4 & 1/4 \end{bmatrix} \tag{5}$$

def distribute_value(dz, shape):
    """
    Distributes the input value in the matrix of dimension shape

    Arguments:
    dz -- input scalar
    shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz

    Returns:
    a -- Array of size (n_H, n_W) for which we distributed the value of dz
    """

    ### START CODE HERE ###
    # Retrieve dimensions from shape (≈1 line)
    (n_H, n_W) = shape

    # Compute the value to distribute on the matrix (≈1 line)
    average = dz / (n_H * n_W)

    # Create a matrix where every entry is the "average" value (≈1 line)
    a = np.zeros(shape) + average
    ### END CODE HERE ###

    return a
a = distribute_value(2, (2, 2))
print('distributed value =', a)
Result:

distributed value = [[ 0.5  0.5]
 [ 0.5  0.5]]
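An equivalent one-liner (an alternative sketch, not the notebook's solution) builds the constant matrix directly with np.full:

a = np.full((2, 2), 2 / (2 * 2))  # same result: every entry is 0.5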
5.2.3 - Putting it together: Pooling backward

Implement the pooling backward pass pool_backward(), using an if/elif to support both the max and the average mode: in average mode, call distribute_value(); in max mode, call create_mask_from_window() and multiply its result by the corresponding entry of dZ.

def pool_backward(dA, cache, mode = "max"):
    """
    Implements the backward pass of the pooling layer

    Arguments:
    dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
    cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")

    Returns:
    dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
    """

    ### START CODE HERE ###

    # Retrieve information from cache: the input tensor and the hyperparameters (≈1 line)
    (A_prev, hparameters) = cache

    # Retrieve hyperparameters stride and f from "hparameters" (≈2 lines)
    stride = hparameters["stride"]
    f = hparameters["f"]

    # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
    m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
    m, n_H, n_W, n_C = dA.shape

    # Initialize dA_prev with zeros, matching the input tensor's shape (≈1 line)
    dA_prev = np.zeros(A_prev.shape)

    for i in range(m):                       # loop over the training examples

        # select training example from A_prev (≈1 line)
        a_prev = A_prev[i]

        for h in range(n_H):                   # loop on the vertical axis
            for w in range(n_W):               # loop on the horizontal axis
                for c in range(n_C):           # loop over the channels (depth)

                    # Find the corners of the current "slice" (≈4 lines)
                    # Locate the input region that fed this dA position
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f

                    # Compute the backward propagation in both modes.
                    if mode == "max":          # MAX-POOL

                        # Use the corners and "c" to define the current slice from a_prev (≈1 line)
                        a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
                        # Create the mask from a_prev_slice (≈1 line)
                        mask = create_mask_from_window(a_prev_slice)
                        # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
                        dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += mask * dA[i, h, w, c]

                    elif mode == "average":    # AVG-POOL

                        # Get the value a from dA (≈1 line)
                        da = dA[i, h, w, c]
                        # Define the shape of the filter as fxf (≈1 line)
                        shape = (f, f)
                        # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
                        dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += distribute_value(da, shape)

    ### END CODE ###

    # Making sure your output shape is correct
    assert(dA_prev.shape == A_prev.shape)

    return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride": 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
Result:

mode = max
mean of dA =  0.145713902729
dA_prev[1,1] =  [[ 0.          0.        ]
 [ 5.05844394 -1.68282702]
 [ 0.          0.        ]]

mode = average
mean of dA =  0.145713902729
dA_prev[1,1] =  [[ 0.08485462  0.2787552 ]
 [ 1.26461098 -0.25749373]
 [ 1.17975636 -0.53624893]]
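A final hedged check: in average mode each dA entry is spread over its window with total weight 1, so the summed gradient is conserved (the same holds in max mode whenever every window has a unique maximum).

# Using dA and cache from the cell above (average mode conserves the sum).
dA_prev_avg = pool_backward(dA, cache, mode = "average")
assert np.isclose(np.sum(dA_prev_avg), np.sum(dA))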

 6 - References

Reposted from: https://www.cnblogs.com/CZiFan/p/9476399.html
