Configure the iSCSI initiator (client) on Windows and CentOS 7

Connect the iSCSI initiator on Windows

Go to the Start menu, search for iSCSI Initiator => open it.
+ On the Configuration tab, check that the initiator's iSCSI name matches the one allowed on the iSCSI server:


Then go back to the Targets tab => type the IP address of the iSCSI server => click Quick Connect.
If there is no error, click Done.
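The same steps can also be scripted with the built-in iscsicli command-line tool instead of the GUI. A minimal sketch, assuming the same target server and target IQN used in the CentOS section below:

C:\> iscsicli QAddTargetPortal 192.168.1.200
C:\> iscsicli ListTargets
C:\> iscsicli QLoginTarget iqn.2016-01.com.example:target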

Configure the iSCSI initiator on CentOS 7
The initiator needs the iscsi-initiator-utils package installed before it can connect, so install it first as shown below.
yum install iscsi-initiator-utils -y
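On CentOS 7 the iscsid daemon is normally started on demand the first time iscsiadm runs, but it can also be enabled explicitly so it always comes up at boot:

[root@client ~]# systemctl enable iscsid
[root@client ~]# systemctl start iscsid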
We will connect with the initiator name specified in the /etc/iscsi/initiatorname.iscsi file. If you modify this name, you will also need to update the ACL on the iSCSI target, as it must match on both sides.
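For example, the current name can be checked as shown below; the IQN here is just a placeholder, as a fresh install generates its own:

[root@client ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:client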
Next we can perform a discovery against the IP address of the target server to see what iSCSI targets are on offer. In this instance 192.168.1.200 is our iSCSI target server.

[root@client ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.200
192.168.1.200:3260,1 iqn.2016-01.com.example:target
From the client system we can see the available target; next we want to log in to it in order to use it.
[root@client ~]# iscsiadm -m node -T iqn.2016-01.com.example:target -l
Logging in to [iface: default, target: iqn.2016-01.com.example:target, portal: 192.168.1.200,3260] (multiple)
Login to [iface: default, target: iqn.2016-01.com.example:target, portal: 192.168.1.200,3260] successful.
From the client we can view all active iSCSI sessions as shown below.
[root@client mnt]# iscsiadm -m session -P 0
tcp: [1] 192.168.1.200:3260,1 iqn.2016-01.com.example:target (non-flash)
We can also change -P 0 to 1, 2, or 3 for increasing levels of information.
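For example, -P 3 also lists the attached SCSI devices for each session, which is handy for mapping sessions to local disk names; a trimmed sketch of the kind of output to expect:

[root@client ~]# iscsiadm -m session -P 3 | grep "Attached scsi disk"
Attached scsi disk sdb State: running
Attached scsi disk sdc State: running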

The fileio and block disks shared from the iSCSI target are now available to the iSCSI initiator, as shown below. In this case, local disk /dev/sdb is backed by the fileio file /tmp/fileio on the target server, while local disk /dev/sdc is backed by the target server's block disk /dev/sdc.
[root@client ~]# lsblk --scsi
NAME HCTL       TYPE VENDOR   MODEL             REV  TRAN
sda  2:0:0:0    disk VMware,  VMware Virtual S 1.0   spi
sdb  3:0:0:0    disk LIO-ORG  testfile         4.0   iscsi
sdc  3:0:0:1    disk LIO-ORG  block            4.0   iscsi
sr0  1:0:0:0    rom  NECVMWar VMware IDE CDR10 1.00  ata
Both of these disks are now usable by the client system as if they were normal locally attached disks.
[root@client ~]# fdisk -l
Disk /dev/sdb: 524 MB, 524288000 bytes, 409600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 8388608 bytes

Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
We can partition or put a file system onto them as if they were local disks.
[root@client ~]# mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=4, agsize=12800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=51200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
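
Note that /dev/sdc will also need a file system before it can be mounted below; assuming we want to use it the same way, we would format it too:

[root@client ~]# mkfs.xfs /dev/sdc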

From there we can mount them anywhere as required. Here we mount them to /mnt and /mnt2 for testing and see that the storage is available for use.
[root@client ~]# mount /dev/sdb /mnt

[root@client ~]# mount /dev/sdc /mnt2

[root@client ~]# df -h | grep mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        497M   11M  486M   6% /mnt
We could then add these into /etc/fstab to mount them automatically during system boot.
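A minimal sketch of what those /etc/fstab entries could look like; the _netdev option is important here, as it tells the system to wait for the network (and the iSCSI login) before attempting the mount. In practice, using UUIDs from blkid is safer than /dev/sdX names, since iSCSI device names can change between boots:

/dev/sdb    /mnt     xfs    _netdev    0 0
/dev/sdc    /mnt2    xfs    _netdev    0 0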
To log out of the iSCSI target, first unmount the disks.
[root@client /]# umount /mnt
Then perform the actual logout; afterwards we confirm there are no active sessions.
[root@client ~]# iscsiadm -m node -u
Logging out of session [sid: 1, target: iqn.2016-01.com.example:target, portal: 192.168.1.200,3260]
Logout of [sid: 1, target: iqn.2016-01.com.example:target, portal: 192.168.1.200,3260] successful.

[root@client ~]# iscsiadm -m session -P 0
iscsiadm: No active sessions.
At this point, if we reboot the client system it will automatically log back in to the iSCSI target, so if you set up automatic mounting via /etc/fstab the filesystems should mount properly. Likewise, if we reboot the iSCSI target server, it should automatically start the target service, making the iSCSI target available on system boot.
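The automatic login happens because, on CentOS, node records created by discovery default to node.startup = automatic. If you would rather the client not reconnect at boot, the record can be switched to manual, or deleted entirely after logging out; a quick sketch:

[root@client ~]# iscsiadm -m node -T iqn.2016-01.com.example:target -o update -n node.startup -v manual

[root@client ~]# iscsiadm -m node -T iqn.2016-01.com.example:target -o delete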