[Financial Risk Control Series] (2): Fraud Detection


This article works through the IEEE-CIS Fraud Detection competition, whose goal is to identify fraudulent transactions. It describes the training and test data, including the transaction and identity fields, and then walks through the key strategies: building a unique user identifier, building aggregation features over it, feature selection, feature encoding, validation strategy, and model training. The final online score is 0.959221. The aim throughout is to learn feature construction.


IEEE-CIS Fraud Detection

This competition comes from Kaggle; the material here is for learning and exchange only.

The main goal is to predict whether each transaction is fraudulent.

The training set has about 590,000 samples (3.5% fraud) and the test set about 500,000.

The data comes in two parts: transaction data and identity data.

This article is mainly a collection and reorganization of the references listed below.


Field tables

Transaction table

TransactionDT: time delta from a given reference datetime (not an actual timestamp)
TransactionAmt: transaction payment amount in USD
ProductCD: product code, the product for each transaction
card1 - card6: payment card information, such as card type, card category, issuing bank, country, etc.
addr: address
dist: distance
P_emaildomain / R_emaildomain: purchaser and recipient email domains
C1 - C14: counts, such as how many addresses are found to be associated with the payment card (the actual meaning is masked)
D1 - D15: time deltas, such as days between previous transactions
M1 - M9: match indicators, such as names on card matching the address
Vxxx: rich features engineered by Vesta, including ranking, counting, and other entity relations

Categorical features:

  • ProductCD
  • card1 - card6
  • addr1, addr2
  • P_emaildomain
  • R_emaildomain
  • M1 - M9

Identity table

The variables in this table are identity information: network connection information (IP, ISP, proxy, etc.) and digital signatures (UA/browser/OS/version, etc.) associated with the transactions.

They are collected by Vesta's fraud protection system and its digital security partners.

(The field names are masked and a pairwise dictionary is not provided, for privacy protection and contractual reasons.)


Categorical features:

  • DeviceType
  • DeviceInfo
  • id_12 - id_38

References:

[1] https://zhuanlan.zhihu.com/p/85947569

[2] https://www.kaggle.com/c/ieee-fraud-detection/discussion/111284

[3] https://www.kaggle.com/c/ieee-fraud-detection/discussion/111308

[4] https://www.kaggle.com/c/ieee-fraud-detection/discussion/101203

Main strategies

  • Build a unique identifier (UID) for each user (critically important)
  • Use the UID to build aggregation features
  • Encode categorical features (mainly frequency encoding and label encoding)
  • Horizontal direction: model ensembling; vertical direction: per-user post-processing (sketched below)
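As a minimal sketch of the vertical direction only, assuming a test-prediction frame sub that already carries the uid built later in this article (sub and its columns are illustrative names, not the competition script itself):

# Per-user post-processing: pull every transaction of a customer toward
# the customer's average score, since labels are nearly constant per card.
sub['isFraud'] = sub.groupby('uid')['isFraud'].transform('mean')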

Definition of fraud

The labeling logic is: a transaction with a reported chargeback on the card is defined as a fraudulent transaction (isFraud=1), and subsequent transactions directly linked to the same user account, email address, or billing address are also labeled as fraud. If none of the above occurs within 120 days, the transaction is defined as legitimate (isFraud=0).

You might think that after 120 days a card would revert to isFraud=0. We rarely see this in the training data (perhaps fraudulent credit cards get terminated). The training set has 73,838 customers (credit cards) with 2 or more transactions. Of these, 71,575 (96.9%) are always isFraud=0, 2,134 (2.9%) are always isFraud=1, and only 129 (0.2%) have a mixture of isFraud=0 and isFraud=1.

From this we can derive the business logic of fraud: if a user has committed fraud before, the probability that their next transaction is fraudulent remains very high, and we should pay attention to this.
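The statistics above can be reproduced with a short groupby, sketched here under the assumption that a frame train holds the isFraud label together with the uid defined in the next section:

# Per-customer label consistency: count distinct labels per uid.
g = train.groupby('uid')['isFraud'].agg(['nunique', 'mean', 'size'])
multi = g[g['size'] >= 2]                  # customers with 2+ transactions
print('always 0:', (multi['mean'] == 0).sum())
print('always 1:', (multi['mean'] == 1).sum())
print('mixed   :', (multi['nunique'] > 1).sum())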


Unique customer identifier

The raw data does not contain a unique UID, so we need to identify customers ourselves. The key columns for identifying a customer are card1, addr1, and D1.

The D1 column is "days since the customer (credit card) began".

The card1 column holds the leading digits of the payment card.

The addr1 column is the user's address code.

Once the user's unique identifier is determined, we cannot simply add it to the model as a feature, because analysis shows that 68.2% of the users in the test set are new users that never appear in the training set. Instead we use the `UID` indirectly, building aggregation features from it, as sketched below.
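A minimal sketch of the UID construction, exactly as it appears later in the reproduction code (since D1 is days since the card began, day - D1 is roughly constant for a given card):

# day index of the transaction, in days since the reference time
X_train['day'] = X_train.TransactionDT / (24 * 60 * 60)
# card1_addr1 comes from encode_CB('card1', 'addr1'); floor(day - D1)
# approximates the card's start day, a per-customer constant
X_train['uid'] = X_train.card1_addr1.astype(str) + '_' + np.floor(X_train.day - X_train.D1).astype(str)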
       

Feature selection

  • Forward feature selection (with single features or groups of features)
  • Recursive feature elimination (with single features or groups of features)
  • Permutation importance
  • Adversarial validation
  • Correlation analysis
  • Time consistency
  • Client consistency
  • Train/test distribution analysis

An interesting trick called "time consistency" is to train a model on the first month of the training data using a single feature (or a small group of features) and predict isFraud on the last month of the training data. This evaluates whether the feature itself is consistent over time. 95% of the features were, but 5% of the columns hurt the model: their training AUC was about 0.60 while their validation AUC was 0.40. A sketch of this check follows.
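A hedged sketch of that check, assuming X_train, y_train, and the month index DT_M computed in the reproduction code below (the hyperparameters here are illustrative):

def time_consistency_auc(feature):
    # Train on the first month of train, validate on the last month.
    first, last = X_train['DT_M'].min(), X_train['DT_M'].max()
    tr = X_train['DT_M'] == first
    va = X_train['DT_M'] == last
    clf = xgb.XGBClassifier(n_estimators=500, max_depth=4, learning_rate=0.05)
    clf.fit(X_train.loc[tr, [feature]], y_train[tr])
    auc_tr = roc_auc_score(y_train[tr], clf.predict_proba(X_train.loc[tr, [feature]])[:, 1])
    auc_va = roc_auc_score(y_train[va], clf.predict_proba(X_train.loc[va, [feature]])[:, 1])
    # drop the feature if the validation AUC falls below 0.5
    return auc_tr, auc_va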


Validation strategy

  • Train two months / skip two months / predict two months
  • Train four months / skip one month / predict one month (sketched below)
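A minimal sketch of the second scheme, again assuming the DT_M month index from the reproduction code:

months = sorted(X_train['DT_M'].unique())
train_m, valid_m = months[:4], months[5:6]   # months[4] is deliberately skipped
tr = X_train['DT_M'].isin(train_m)
va = X_train['DT_M'].isin(valid_m)
# fit on X_train[tr], evaluate on X_train[va]; the skipped month mimics
# the time gap between the end of train and the start of the test set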

Feature encoding

The following five feature-encoding approaches are the main ones used.

Frequency encoding: encode each value by how often it occurs (here as a relative frequency).

def encode_FE(df1, df2, cols):
    for col in cols:
        # frequency of each value across train and test combined
        df = pd.concat([df1[col], df2[col]])
        vc = df.value_counts(dropna=True, normalize=True).to_dict()
        vc[-1] = -1
        nm = col + "FE"
        df1[nm] = df1[col].map(vc)
        df1[nm] = df1[nm].astype("float32")
        df2[nm] = df2[col].map(vc)
        df2[nm] = df2[nm].astype("float32")
        print(col)
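For example, the reproduction code later applies it to several card and address columns in one call:

encode_FE(X_train, X_test, ['addr1', 'card1', 'card2', 'card3', 'P_emaildomain'])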
       

Label encoding: map the raw values to a sequence of ordinal integers. It is related to one-hot encoding, except that pd.factorize maps values to integer codes such as 0, 1, 2, while pd.get_dummies() maps them to [1,0,0], [0,1,0], [0,0,1].

def encode_LE(col, train=X_train, test=X_test, verbose=True):
    df_comb = pd.concat([train[col], test[col]], axis=0)
    df_comb, _ = pd.factorize(df_comb)
    nm = col
    # use a smaller dtype when the cardinality allows it
    if df_comb.max() > 32000:
        train[nm] = df_comb[0: len(train)].astype("float32")
        test[nm] = df_comb[len(train):].astype("float32")
    else:
        train[nm] = df_comb[0: len(train)].astype("float16")
        test[nm] = df_comb[len(train):].astype("float16")
    del df_comb
    gc.collect()
    if verbose:
        print(col)
       

Aggregation features: group rows with DataFrame.groupby, then compute per-group statistics with agg.

def encode_AG(main_columns, uids, aggregations=["mean"], df_train=X_train, df_test=X_test, fillna=True, usena=False):
    for main_column in main_columns:
        for col in uids:
            for agg_type in aggregations:
                new_column = main_column + "_" + col + "_" + agg_type
                temp_df = pd.concat([df_train[[col, main_column]], df_test[[col, main_column]]])
                if usena:
                    temp_df.loc[temp_df[main_column] == -1, main_column] = np.nan
                # mean or std of main_column within each uid group
                temp_df = temp_df.groupby([col])[main_column].agg([agg_type]).reset_index().rename(
                    columns={agg_type: new_column})
                # index by the uid value
                temp_df.index = list(temp_df[col])
                # temp_df becomes a mapping dict: uid -> statistic
                temp_df = temp_df[new_column].to_dict()
                df_train[new_column] = df_train[col].map(temp_df).astype("float32")
                df_test[new_column] = df_test[col].map(temp_df).astype("float32")
                if fillna:
                    df_train[new_column].fillna(-1, inplace=True)
                    df_test[new_column].fillna(-1, inplace=True)
                print(new_column)
       

Cross features: combine two columns into a new feature, then label-encode it.

def encode_CB(col1, col2, df1=X_train, df2=X_test):
    nm = col1 + '_' + col2
    df1[nm] = df1[col1].astype(str) + '_' + df1[col2].astype(str)
    df2[nm] = df2[col1].astype(str) + '_' + df2[col2].astype(str)
    encode_LE(nm, verbose=False)
    print(nm, ', ', end='')
       

Unique-count features: after grouping, return the number of distinct values of the target column.

def encode_AG2(main_columns, uids, train_df=X_train, test_df=X_test):
    for main_column in main_columns:
        for col in uids:
            comb = pd.concat([train_df[[col] + [main_column]], test_df[[col] + [main_column]]], axis=0)
            mp = comb.groupby(col)[main_column].agg(['nunique'])['nunique'].to_dict()
            train_df[col + '_' + main_column + '_ct'] = train_df[col].map(mp).astype('float32')
            test_df[col + '_' + main_column + '_ct'] = test_df[col].map(mp).astype('float32')
            print(col + '_' + main_column + '_ct, ', end='')
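For example, the reproduction code later counts, per uid, the number of distinct email domains, distances, months, and cent values:

encode_AG2(['P_emaildomain', 'dist1', 'DT_M', 'id_02', 'cents'], ['uid'], train_df=X_train, test_df=X_test)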
   

Reproduction code

Because the dataset's original file name contains spaces, first manually rename the archive under /data104475 to IEEE_CIS_Fraud_Detection.zip.

In [2]
# Unzip the dataset (only needed on the first run)
!unzip -q -o data/data104475/IEEE_CIS_Fraud_Detection.zip -d /home/aistudio/data
       
unzip:  cannot find or open data/data104475/IEEE_CIS_Fraud_Detection.zip, data/data104475/IEEE_CIS_Fraud_Detection.zip.zip or data/data104475/IEEE_CIS_Fraud_Detection.zip.ZIP.
In [3]
# Install dependencies
!pip install xgboost
In [6]
import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import os, gc
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score
import xgboost as xgb
import datetime
In [4]
path_train_transaction = "./data/raw_data/train_transaction.csv"
path_train_id = "./data/raw_data/train_identity.csv"
path_test_transaction = "./data/raw_data/test_transaction.csv"
path_test_id = "./data/raw_data/test_identity.csv"
path_sample_submission = './data/raw_data/sample_submission.csv'
path_submission = 'sub_xgb_95.csv'
In [7]
BUILD95 = False
BUILD96 = True
# cols with strings
str_type = ['ProductCD', 'card4', 'card6', 'P_emaildomain', 'R_emaildomain', 'M1', 'M2', 'M3', 'M4', 'M5',
            'M6', 'M7', 'M8', 'M9', 'id_12', 'id_15', 'id_16', 'id_23', 'id_27', 'id_28', 'id_29', 'id_30',
            'id_31', 'id_33', 'id_34', 'id_35', 'id_36', 'id_37', 'id_38', 'DeviceType', 'DeviceInfo']
# first 53 columns
cols = ['TransactionID', 'TransactionDT', 'TransactionAmt',
        'ProductCD', 'card1', 'card2', 'card3', 'card4', 'card5', 'card6',
        'addr1', 'addr2', 'dist1', 'dist2', 'P_emaildomain', 'R_emaildomain',
        'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'C10', 'C11',
        'C12', 'C13', 'C14', 'D1', 'D2', 'D3', 'D4', 'D5', 'D6', 'D7', 'D8',
        'D9', 'D10', 'D11', 'D12', 'D13', 'D14', 'D15', 'M1', 'M2', 'M3', 'M4',
        'M5', 'M6', 'M7', 'M8', 'M9']
# V COLUMNS TO LOAD DECIDED BY CORRELATION EDA
# https://www.kaggle.com/cdeotte/eda-for-columns-v-and-id
v = [1, 3, 4, 6, 8, 11]
v += [13, 14, 17, 20, 23, 26, 27, 30]
v += [36, 37, 40, 41, 44, 47, 48]
v += [54, 56, 59, 62, 65, 67, 68, 70]
v += [76, 78, 80, 82, 86, 88, 89, 91]
# v += [96, 98, 99, 104]  # relates to groups, no NAN
v += [107, 108, 111, 115, 117, 120, 121, 123]  # maybe group, no NAN
v += [124, 127, 129, 130, 136]  # relates to groups, no NAN
# LOTS OF NAN BELOW
v += [138, 139, 142, 147, 156, 162]  # b1
v += [165, 160, 166]  # b1
v += [178, 176, 173, 182]  # b2
v += [187, 203, 205, 207, 215]  # b2
v += [169, 171, 175, 180, 185, 188, 198, 210, 209]  # b2
v += [218, 223, 224, 226, 228, 229, 235]  # b3
v += [240, 258, 257, 253, 252, 260, 261]  # b3
v += [264, 266, 267, 274, 277]  # b3
v += [220, 221, 234, 238, 250, 271]  # b3
v += [294, 284, 285, 286, 291, 297]  # relates to groups, no NAN
v += [303, 305, 307, 309, 310, 320]  # relates to groups, no NAN
v += [281, 283, 289, 296, 301, 314]  # relates to groups, no NAN
# v += [332, 325, 335, 338]  # b4 lots NAN
cols += ['V' + str(x) for x in v]
dtypes = {}
for c in cols + ['id_0' + str(x) for x in range(1, 10)] + ['id_' + str(x) for x in range(10, 34)]:
    dtypes[c] = 'float32'
for c in str_type:
    dtypes[c] = 'category'

# load data and merge
print("load data...")
X_train = pd.read_csv(path_train_transaction, index_col="TransactionID", dtype=dtypes, usecols=cols + ["isFraud"])
train_id = pd.read_csv(path_train_id, index_col="TransactionID", dtype=dtypes)
X_train = X_train.merge(train_id, how="left", left_index=True, right_index=True)

X_test = pd.read_csv(path_test_transaction, index_col="TransactionID", dtype=dtypes, usecols=cols)
test_id = pd.read_csv(path_test_id, index_col="TransactionID", dtype=dtypes)
X_test = X_test.merge(test_id, how="left", left_index=True, right_index=True)

# target
y_train = X_train["isFraud"]
del train_id, test_id, X_train["isFraud"]
print("X_train shape:{}, X_test shape:{}".format(X_train.shape, X_test.shape))
       
load data...
X_train shape:(590540, 213), X_test shape:(506691, 213)
In [21]
# transform D feature "time delta" as "time point"
for i in range(1, 16):
    if i in [1, 2, 3, 5, 9]:
        continue
    X_train["D" + str(i)] = X_train["D" + str(i)] - X_train["TransactionDT"] / np.float32(60 * 60 * 24)
    X_test["D" + str(i)] = X_test["D" + str(i)] - X_test["TransactionDT"] / np.float32(60 * 60 * 24)

# encoding functions
# frequency encode
def encode_FE(df1, df2, cols):
    for col in cols:
        df = pd.concat([df1[col], df2[col]])
        vc = df.value_counts(dropna=True, normalize=True).to_dict()
        vc[-1] = -1
        nm = col + "FE"
        df1[nm] = df1[col].map(vc)
        df1[nm] = df1[nm].astype("float32")
        df2[nm] = df2[col].map(vc)
        df2[nm] = df2[nm].astype("float32")
        print(col)

# label encode
def encode_LE(col, train=X_train, test=X_test, verbose=True):
    df_comb = pd.concat([train[col], test[col]], axis=0)
    df_comb, _ = pd.factorize(df_comb)
    nm = col
    if df_comb.max() > 32000:
        train[nm] = df_comb[0: len(train)].astype("float32")
        test[nm] = df_comb[len(train):].astype("float32")
    else:
        train[nm] = df_comb[0: len(train)].astype("float16")
        test[nm] = df_comb[len(train):].astype("float16")
    del df_comb
    gc.collect()
    if verbose:
        print(col)

def encode_AG(main_columns, uids, aggregations=["mean"], df_train=X_train, df_test=X_test, fillna=True, usena=False):
    for main_column in main_columns:
        for col in uids:
            for agg_type in aggregations:
                new_column = main_column + "_" + col + "_" + agg_type
                temp_df = pd.concat([df_train[[col, main_column]], df_test[[col, main_column]]])
                if usena:
                    temp_df.loc[temp_df[main_column] == -1, main_column] = np.nan
                # mean or std of main_column within each uid group
                temp_df = temp_df.groupby([col])[main_column].agg([agg_type]).reset_index().rename(
                    columns={agg_type: new_column})
                # index by the uid value
                temp_df.index = list(temp_df[col])
                # temp_df becomes a mapping dict
                temp_df = temp_df[new_column].to_dict()
                df_train[new_column] = df_train[col].map(temp_df).astype("float32")
                df_test[new_column] = df_test[col].map(temp_df).astype("float32")
                if fillna:
                    df_train[new_column].fillna(-1, inplace=True)
                    df_test[new_column].fillna(-1, inplace=True)
                print(new_column)

# COMBINE FEATURES (cross features)
def encode_CB(col1, col2, df1=X_train, df2=X_test):
    nm = col1 + '_' + col2
    df1[nm] = df1[col1].astype(str) + '_' + df1[col2].astype(str)
    df2[nm] = df2[col1].astype(str) + '_' + df2[col2].astype(str)
    encode_LE(nm, verbose=False)
    print(nm, ', ', end='')

# GROUP AGGREGATION NUNIQUE
def encode_AG2(main_columns, uids, train_df=X_train, test_df=X_test):
    for main_column in main_columns:
        for col in uids:
            comb = pd.concat([train_df[[col] + [main_column]], test_df[[col] + [main_column]]], axis=0)
            mp = comb.groupby(col)[main_column].agg(['nunique'])['nunique'].to_dict()
            train_df[col + '_' + main_column + '_ct'] = train_df[col].map(mp).astype('float32')
            test_df[col + '_' + main_column + '_ct'] = test_df[col].map(mp).astype('float32')
            print(col + '_' + main_column + '_ct, ', end='')

print("encode cols...")
# TRANSACTION AMT CENTS
X_train['cents'] = (X_train['TransactionAmt'] - np.floor(X_train['TransactionAmt'])).astype('float32')
X_test['cents'] = (X_test['TransactionAmt'] - np.floor(X_test['TransactionAmt'])).astype('float32')
print('cents, ', end='')
       
encode cols...
cents,
In [19]
# FREQUENCY ENCODE: ADDR1, CARD1, CARD2, CARD3, P_EMAILDOMAIN
encode_FE(X_train, X_test, ['addr1', 'card1', 'card2', 'card3', 'P_emaildomain'])
# COMBINE COLUMNS CARD1+ADDR1, CARD1+ADDR1+P_EMAILDOMAIN
encode_CB('card1', 'addr1')
encode_CB('card1_addr1', 'P_emaildomain')
# FREQUENCY ENCODE
encode_FE(X_train, X_test, ['card1_addr1', 'card1_addr1_P_emaildomain'])
# GROUP AGGREGATE
encode_AG(['TransactionAmt', 'D9', 'D11'], ['card1', 'card1_addr1', 'card1_addr1_P_emaildomain'], ['mean', 'std'],
          usena=False)
for col in str_type:
    encode_LE(col, X_train, X_test)

"""
Feature Selection - Time Consistency
We added 28 new features above. We have already removed 219 V columns via the correlation analysis done here.
So we currently have 242 features. We will now check each of the 242 for "time consistency".
We will build 242 models. Each model will be trained on the first month of the training data and will only use one feature.
We will then predict the last month of the training data. We want both training AUC and validation AUC to be above 0.5.
It turns out that 19 features fail this test, so we will remove them.
Additionally we will remove 7 D columns that are mostly NAN. More techniques for feature selection are listed here.
"""

cols = list(X_train.columns)
cols.remove('TransactionDT')
for c in ['D6', 'D7', 'D8', 'D9', 'D12', 'D13', 'D14']:
    cols.remove(c)
# FAILED TIME CONSISTENCY TEST
for c in ['C3', 'M5', 'id_08', 'id_33']:
    cols.remove(c)
for c in ['card4', 'id_07', 'id_14', 'id_21', 'id_30', 'id_32', 'id_34']:
    cols.remove(c)
for c in ['id_' + str(x) for x in range(22, 28)]:
    cols.remove(c)
print('NOW USING THE FOLLOWING', len(cols), 'FEATURES.')

# CHRIS - TRAIN 75% PREDICT 25%
idxT = X_train.index[:3 * len(X_train) // 4]
idxV = X_train.index[3 * len(X_train) // 4:]

print(X_train.info())
# X_train = X_train.convert_objects(convert_numeric=True)
# X_test = X_test.convert_objects(convert_numeric=True)
for col in str_type:
    print(col)
    X_train[col] = X_train[col].astype(int)
    X_test[col] = X_test[col].astype(int)
print("after transform:")
print(X_train.info())
# fillna
for col in cols:
    X_train[col].fillna(-1, inplace=True)
    X_test[col].fillna(-1, inplace=True)
In [22]
START_DATE = datetime.datetime.strptime('2017-11-30', '%Y-%m-%d')
X_train['DT_M'] = X_train['TransactionDT'].apply(lambda x: (START_DATE + datetime.timedelta(seconds=x)))
X_train['DT_M'] = (X_train['DT_M'].dt.year - 2017) * 12 + X_train['DT_M'].dt.month

X_test['DT_M'] = X_test['TransactionDT'].apply(lambda x: (START_DATE + datetime.timedelta(seconds=x)))
X_test['DT_M'] = (X_test['DT_M'].dt.year - 2017) * 12 + X_test['DT_M'].dt.month

print("training...")
if BUILD95:
    oof = np.zeros(len(X_train))
    preds = np.zeros(len(X_test))

    skf = GroupKFold(n_splits=6)
    for i, (idxT, idxV) in enumerate(skf.split(X_train, y_train, groups=X_train['DT_M'])):
        month = X_train.iloc[idxV]['DT_M'].iloc[0]
        print('Fold', i, 'withholding month', month)
        print(' rows of train =', len(idxT), 'rows of holdout =', len(idxV))
        clf = xgb.XGBClassifier(
            n_estimators=5000,
            max_depth=12,
            learning_rate=0.02,
            subsample=0.8,
            colsample_bytree=0.4,
            missing=-1,
            eval_metric='auc',
            # USE CPU
            # nthread=4,
            # tree_method='hist'
            # USE GPU
            tree_method='gpu_hist'
        )
        h = clf.fit(X_train[cols].iloc[idxT], y_train.iloc[idxT],
                    eval_set=[(X_train[cols].iloc[idxV], y_train.iloc[idxV])],
                    verbose=100, early_stopping_rounds=200)

        oof[idxV] += clf.predict_proba(X_train[cols].iloc[idxV])[:, 1]
        preds += clf.predict_proba(X_test[cols])[:, 1] / skf.n_splits
        del h, clf
        x = gc.collect()
    print('#' * 20)
    print('XGB95 OOF CV=', roc_auc_score(y_train, oof))

if BUILD95:
    sample_submission = pd.read_csv(path_sample_submission)
    sample_submission.isFraud = preds
    sample_submission.to_csv(path_submission, index=False)

# BUILD THE CLIENT UID
X_train['day'] = X_train.TransactionDT / (24 * 60 * 60)
X_train['uid'] = X_train.card1_addr1.astype(str) + '_' + np.floor(X_train.day - X_train.D1).astype(str)

X_test['day'] = X_test.TransactionDT / (24 * 60 * 60)
X_test['uid'] = X_test.card1_addr1.astype(str) + '_' + np.floor(X_test.day - X_test.D1).astype(str)

# FREQUENCY ENCODE UID
encode_FE(X_train, X_test, ['uid'])
# AGGREGATE
encode_AG(['TransactionAmt', 'D4', 'D9', 'D10', 'D15'], ['uid'], ['mean', 'std'], fillna=True, usena=True)
# AGGREGATE
encode_AG(['C' + str(x) for x in range(1, 15) if x != 3], ['uid'], ['mean'], X_train, X_test, fillna=True, usena=True)
# AGGREGATE
encode_AG(['M' + str(x) for x in range(1, 10)], ['uid'], ['mean'], fillna=True, usena=True)
# AGGREGATE
encode_AG2(['P_emaildomain', 'dist1', 'DT_M', 'id_02', 'cents'], ['uid'], train_df=X_train, test_df=X_test)
# AGGREGATE
encode_AG(['C14'], ['uid'], ['std'], X_train, X_test, fillna=True, usena=True)
# AGGREGATE
encode_AG2(['C13', 'V314'], ['uid'], train_df=X_train, test_df=X_test)
# AGGREGATE
encode_AG2(['V127', 'V136', 'V309', 'V307', 'V320'], ['uid'], train_df=X_train, test_df=X_test)
# NEW FEATURE
X_train['outsider15'] = (np.abs(X_train.D1 - X_train.D15) > 3).astype('int8')
X_test['outsider15'] = (np.abs(X_test.D1 - X_test.D15) > 3).astype('int8')
print('outsider15')

cols = list(X_train.columns)
cols.remove('TransactionDT')
for c in ['D6', 'D7', 'D8', 'D9', 'D12', 'D13', 'D14']:
    if c in cols:
        cols.remove(c)
for c in ['oof', 'DT_M', 'day', 'uid']:
    if c in cols:
        cols.remove(c)
# FAILED TIME CONSISTENCY TEST
for c in ['C3', 'M5', 'id_08', 'id_33']:
    if c in cols:
        cols.remove(c)
for c in ['card4', 'id_07', 'id_14', 'id_21', 'id_30', 'id_32', 'id_34']:
    if c in cols:
        cols.remove(c)
for c in ['id_' + str(x) for x in range(22, 28)]:
    if c in cols:
        cols.remove(c)
print('NOW USING THE FOLLOWING', len(cols), 'FEATURES.')
print(np.array(cols))

if BUILD96:
    oof = np.zeros(len(X_train))
    preds = np.zeros(len(X_test))

    skf = GroupKFold(n_splits=6)
    for i, (idxT, idxV) in enumerate(skf.split(X_train, y_train, groups=X_train['DT_M'])):
        month = X_train.iloc[idxV]['DT_M'].iloc[0]
        print('Fold', i, 'withholding month', month)
        print(' rows of train =', len(idxT), 'rows of holdout =', len(idxV))
        clf = xgb.XGBClassifier(
            n_estimators=5000,
            max_depth=12,
            learning_rate=0.02,
            subsample=0.8,
            colsample_bytree=0.4,
            missing=-1,
            eval_metric='auc',
            # USE CPU
            # nthread=4,
            # tree_method='hist'
            # USE GPU
            tree_method='gpu_hist'
        )
        h = clf.fit(X_train[cols].iloc[idxT], y_train.iloc[idxT],
                    eval_set=[(X_train[cols].iloc[idxV], y_train.iloc[idxV])],
                    verbose=100, early_stopping_rounds=200)

        oof[idxV] += clf.predict_proba(X_train[cols].iloc[idxV])[:, 1]
        preds += clf.predict_proba(X_test[cols])[:, 1] / skf.n_splits
        del h, clf
        x = gc.collect()
    print('#' * 20)
    print('XGB96 OOF CV=', roc_auc_score(y_train, oof))

if BUILD96:
    sample_submission = pd.read_csv(path_sample_submission)
    sample_submission.isFraud = preds
    sample_submission.to_csv(path_submission, index=False)
   

Summary

  • This project collects and organizes material on IEEE-CIS Fraud Detection, with the goal of learning how to construct features.

The submission result is as follows (submitting requires a VPN to reach Kaggle):

Dataset: IEEE-CIS Fraud Detection
Online score: 0.959221

