• synology.cedric.datacenter

I won a Synology DS710+ & DX510 by filling out a survey, after having read an official tweet from Synology Inc. about the contest 3 weeks ago. The funny part is that I had bought a Synology DS1010+ (704CHF/692$) and 4TB of disks (280CHF/275$) at the same time…

The picture above shows my current small datacenter :-)

• The DS408 has had four 1TB disks in RAID5, giving a 2.7TB volume, since 2008
• The DS1010+ has two 2TB disks in RAID1, giving a 1.8TB volume
• I have 3.5TB of other disks, not pictured, in my custom NAS that will soon be retired…

The DX510 will perfectly complete my existing DS1010+, while the DS710+ will be assigned to other tasks. Synology really develops high-performance, reliable, versatile, and environmentally friendly Network Attached Storage products. I have never had any problems, and all software updates worked as expected. They are by far the best NAS on the market, with a lot of software functionality thanks to the Linux kernel.

    Synology DX510

Expansion Unit for Increasing the Capacity of the Synology DiskStation DS710+ or DS1010+. When the storage capacity of the Synology DS710+/DS1010+ nears its limits, it can easily be expanded with the Synology DX510. The DX510 connects securely to the DS710+/DS1010+ via an eSATA cable with specially-designed connectors on both ends to ensure maximum throughput. The DX510 can directly expand the existing storage of the Synology DS710+ to a maximum 14TB volume.



    Synology DS 710+




    • Intel Atom CPU Frequency: 1.67GHz
    • Floating Point
    • Memory Bus: 64bit@DDR800
    • Memory: 1GB
    • Internal HDD1: 3.5" SATA(II) X2 or 2.5" SATA/SSD X2
• Max Internal Capacity: 4TB (2x 2TB hard drives)
  (See All Supported HDD)
    • Hot Swappable HDD
    • Size (HxWxD): 157mm X 103.5mm X 232mm
    • External HDD Interface: USB 2.0 port X3, eSATA port X1
    • Weight: 1.69kg
    • LAN: Gigabit X1
• Wireless Support
    • Fan: X1(80mmX80mm)
    • Wake on LAN/WAN
• Noise Level: 22.1dB(A)
    • Power Recovery
    • AC Input Power Voltage: 100V to 240V
    • Power Frequency: 50Hz to 60Hz, Single Phase
• Power Consumption: 31W (Access); 17W (HDD Hibernation)
    • Operating Temperature: 5°C to 35°C (40°F to 95°F)
    • Storage Temperature: -10°C to 70°C (15°F to 155°F)
    • Relative Humidity: 5% to 95%RH
    • Maximum Operating Altitude: 10,000 feet
    • Certification: FCC Class B, CE Class B, BSMI Class B

• I've made many errors when building my NAS server, and
  this forced me to give up on using Sun's ZFS (Zettabyte File
  System), at least for this year... In fact I had decided to build
  a NAS before even knowing of the existence of ZFS, and bought
  the following hardware components:
• 1 Promise SuperTrak EX8350 with eight SATA II 3Gb/s ports (RAID6)
    • The cheapest integrated mainboard available: NFORCE4 IGP
    • AMD64 3000+
It took me half a day to update both the mainboard BIOS (in order to use the Promise EX8350 PCIe 4x card in the PCIe 16x slot) and the controller BIOS (for RAID6 support)! The crazy process of updating BIOS and firmware with a floppy disk has still not disappeared. The second issue was to create a floppy disk on a system without any OS.

The solution came, of course, from Knoppix. I was able to find old DOS floppy disk images online (all DOS and Windows versions are available there). I quickly booted my diskless machine with Knoppix and formatted a new floppy:
    # fdformat /dev/fd0

    and extract the boot image by typing:
    # dd if=bootdisk.img of=/dev/fd0 bs=1440k
This allowed me to flash the mainboard with the latest ASUS BIOS available (1001) and the Promise controller.

I've contacted Promise support twice (Europe AND USA); the response is below:

So if you ever want to build a NAS powered by a Solaris flavor, first consult the Hardware Compatibility List (HCL), and avoid Promise Technology. I've found that all other major manufacturers like Adaptec and Areca provide Solaris drivers (HERE), even if they are quite old (mid-2005).

I've also tried some Solaris flavors which I can definitely recommend if you decide to play with ZFS:
Neither version seems to be based on OpenSolaris Nevada build 44 yet, so I was not able to play with RAID-Z2 (the equivalent of a RAID6 array):

A replicated RAID-Z configuration can now have either single- or
double-parity, which means that one or two device failures, respectively, can be sustained
without any data loss. You can specify the raidz2 keyword for a double-parity RAID-Z
configuration, or the raidz or raidz1 keyword for a single-parity RAID-Z configuration.
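Once a build with RAID-Z2 support is available, creating either flavour is a one-liner. Here is a minimal sketch, assuming a pool named tank and five placeholder disk devices; the commands are prefixed with echo so the sketch also runs on a machine without ZFS (remove the echo to execute for real):

```shell
#!/bin/sh
# RAID-Z pool creation sketch; pool name and disk devices are placeholder
# assumptions. "echo" lets this run without ZFS installed; drop it to execute.
DISKS="c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0"

# single parity (RAID5-like): survives one disk failure
echo zpool create tank raidz1 $DISKS

# double parity (RAID6-like): survives two disk failures
echo zpool create tank raidz2 $DISKS
```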

I've also tried Solaris Express 10 (Live CD), which is also available for free (non-commercial use), but I was really not convinced by the desktop, and hardware was not better recognized.
What can also stop you from using ZFS is the encryption subproject, which has not delivered yet, and the fact that the only supported pool sharing is NFS (Windows supports it with "Windows Services for UNIX", a 300MB download), Samba export still being in development.

This gives me 2 options: use either a Windows or a Linux operating system. Windows has a major advantage in having all drivers supported (Cool'n'Quiet, Nforce4 chipset, Promise driver and management console), but all its insecurities and its fully fledged desktop are NOT needed on a true file server. Linux, on the other side, also has all drivers available (except the Promise WebPam management console), and is a lot more modular: I can remove all functionalities I don't need: no FTP, no desktop, no HTTP daemon,... Samba, ssh2 and ReiserFS are all I need!
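For the record, a Samba-only file server needs surprisingly little configuration; a minimal smb.conf sketch looks something like this (the workgroup, share name, path and user below are examples, not my actual setup):

```ini
[global]
   workgroup = HOME          ; example workgroup
   security = user

[data]                       ; example share name
   path = /srv/data          ; example path
   read only = no
   valid users = cedric      ; example user
```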

I may choose for the job:
• OpenSuse 10.1, since I have been using SuSE for 3 years, or
• FreeBSD, a leader in stability and security in the Unix world.
Right now, I've put 5 disks of 320GB in a RAID5 logical array; the initialization of 1.2TB took 18 hours!
    Promise Ex8350 initializing the NAS

This box has 14 SATA ports, to which I've added old disks full of data (300GB and 160GB), and 8 USB ports (plus 2 Maxtor 300GB USB disks).

Wattage controller checking the power consumption of the NAS. The power consumption is quite high, not only because of all the hard disks (15 Watts * 7 = 105 Watts), but also because of the AMD64 (95 Watts at 1800MHz, 63 Watts at 800MHz when Cool'n'Quiet is active). The Promise Intel IOP CPU is also sucking energy: without it in the box, total power consumption was below 100 Watts; with it, 150 Watts!

In order to better tune the box for power consumption (down-clocking, reducing the CPU core voltage), I've bought a cheap wattage controller (7 euro); on the left is the NAS running during initialization of the array, without Cool'n'Quiet.

  • origin: WikiPedia

An open source implementation of the SMB file sharing protocol that provides file and print services to SMB/CIFS clients. Samba allows a non-Windows server to communicate using the same networking protocol as the Windows products. Samba was originally developed for Unix but can now run on Linux, FreeBSD and other Unix variants. It is freely available under the GNU General Public License. The name Samba is a variant of SMB, the protocol from which it stems. As of version 3, Samba not only provides file and print services for various Microsoft Windows clients but can also integrate with a Windows Server domain, either as a Primary Domain Controller (PDC) or as a Domain Member. It can also be part of an Active Directory domain.

    Server message block (SMB) is a network protocol mainly applied to share files, printers, serial ports, and miscellaneous communications between nodes on a network. It is mainly used by Microsoft Windows equipped computers.

    The File Transfer Protocol (FTP) is a software standard for transferring computer files between machines with widely different operating systems. It belongs to the application layer of the Internet protocol suite.
    Network File System (NFS) is a protocol originally developed by Sun Microsystems in 1984 and defined in RFCs 1094, 1813, (3010) and 3530, as a file system which allows a computer to access files over a network as easily as if they were on its local disks.

    rsync is a computer program which synchronises files and directories from one location to another while minimizing data transfer using delta encoding when appropriate. An important feature of rsync not found in most similar programs/protocols is that the mirroring takes place with only one transmission in each direction.

• under construction
• under construction
  • Putting OpenSolaris in a NAS server

OpenSolaris is an open source project created by Sun Microsystems to build a developer community around the Solaris Operating System technology. OpenSolaris Express is the official distribution and can be downloaded HERE, but I will use a fork of that code.

Why Solaris for a NAS server?

Solaris itself, while being a rock-solid operating system, is not really needed for a NAS server (it is oversized). What has increased my interest in it is ZFS, the Zettabyte File System. This is an extract of the arguments that fit my needs nicely:


    • ZFS is a new kind of filesystem that provides simple administration, transactional semantics, end-to-end data integrity, and immense scalability. ZFS is not an incremental improvement to existing technology; it is a fundamentally new approach to data management. We've blown away 20 years of obsolete assumptions, eliminated complexity at the source, and created a storage system that's actually a pleasure to use.
    • ZFS presents a pooled storage model that completely eliminates the concept of volumes and the associated problems of partitions, provisioning, wasted bandwidth and stranded storage. Thousands of filesystems can draw from a common storage pool, each one consuming only as much space as it actually needs. The combined I/O bandwidth of all devices in the pool is available to all filesystems at all times.
    • All operations are copy-on-write transactions, so the on-disk state is always valid. There is no need to fsck(1M) a ZFS filesystem, ever. Every block is checksummed to prevent silent data corruption, and the data is self-healing in replicated (mirrored or RAID) configurations. If one copy is damaged, ZFS will detect it and use another copy to repair it.
    • ZFS introduces a new data replication model called RAID-Z. It is similar to RAID-5 but uses variable stripe width to eliminate the RAID-5 write hole (stripe corruption due to loss of power between data and parity updates). All RAID-Z writes are full-stripe writes. There's no read-modify-write tax, no write hole, and — the best part — no need for NVRAM in hardware. ZFS loves cheap disks.
    • But cheap disks can fail, so ZFS provides disk scrubbing. Like ECC memory scrubbing, the idea is to read all data to detect latent errors while they're still correctable. A scrub traverses the entire storage pool to read every copy of every block, validate it against its 256-bit checksum, and repair it if necessary. All this happens while the storage pool is live and in use.
• ZFS has a pipelined I/O engine, similar in concept to CPU pipelines. The pipeline operates on I/O dependency graphs and provides scoreboarding, priority, deadline scheduling, out-of-order issue and I/O aggregation. I/O loads that bring other filesystems to their knees are handled with ease by the ZFS I/O pipeline.
    • ZFS provides unlimited constant-time snapshots and clones. A snapshot is a read-only point-in-time copy of a filesystem, while a clone is a writable copy of a snapshot. Clones provide an extremely space-efficient way to store many copies of mostly-shared data such as workspaces, software installations, and diskless clients.
    • ZFS backup and restore are powered by snapshots. Any snapshot can generate a full backup, and any pair of snapshots can generate an incremental backup. Incremental backups are so efficient that they can be used for remote replication — e.g. to transmit an incremental update every 10 seconds.
    • There are no arbitrary limits in ZFS. You can have as many files as you want; full 64-bit file offsets; unlimited links, directory entries, snapshots, and so on.
    • ZFS provides built-in compression. In addition to reducing space usage by 2-3x, compression also reduces the amount of I/O by 2-3x. For this reason, enabling compression actually makes some workloads go faster.
    • In addition to filesystems, ZFS storage pools can provide volumes for applications that need raw-device semantics. ZFS volumes can be used as swap devices, for example. And if you enable compression on a swap volume, you now have compressed virtual memory.
    • ZFS administration is both simple and powerful.
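As an illustration of that last point, day-to-day administration boils down to one-liners. Here is a sketch with placeholder pool and filesystem names; the commands are echoed so it runs even on a machine without ZFS installed (drop the echo to execute for real):

```shell
#!/bin/sh
# ZFS administration sketch; pool/filesystem names are placeholders.
# "echo" lets this run without ZFS installed; drop it to execute for real.
FS=tank/home/cedric

echo zfs create $FS                      # carve a filesystem out of the pool
echo zfs set compression=on $FS          # built-in compression
echo zfs snapshot $FS@monday             # constant-time snapshot
echo zfs clone $FS@monday tank/scratch   # writable copy of the snapshot
echo zpool scrub tank                    # validate every block against its checksum
```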


This speaks for itself. I've seen 2 demos HERE, and while the hardware support is not that great, I've decided to give it a try. Note that Linux may have a port of ZFS before July 2006, as it is a sponsored Google Summer of Code project.

Which Solaris flavor?

In fact, it is possible to use one of the following OpenSolaris distributions:
• BeleniX is a *NIX distribution built from the OpenSolaris source base. It is currently a LiveCD distribution but is intended to grow into a complete distro that can be installed to hard disk. BeleniX has been developed out of Bangalore, the silicon capital of India, and was born at the India Engineering Center of Sun Microsystems. And... it uses KDE: the best open source desktop.
• SchilliX, a live CD
    • marTux, a live CD/DVD, for Sparc
    • Nexenta, a Debian-based distribution combining GNU software and Solaris' SunOS kernel
    • Polaris, a PowerPC port

Status: stable, in development
Developers: __
Homepage: BeleniX
Version: 0.4.3a
Based on: OpenSolaris
Protocols:
• NFS
• SMB/CIFS
• HTTP/WebDAV
• FTP
Network directories support:
• ???
Software RAID: 0, 1, 5, 6
Hardware RAID: ??
Interface: none
• Remote login is deactivated but can be re-enabled: you need to comment out the line "CONSOLE=/dev/console" in the file /etc/default/login to allow remote root login.
• maybe VNC remote access
Size: ??
Can be installed:
• as a Live CD, but mount points have to be recreated
• on hard disk only, because of its size
File system: EXT2/EXT3, ZFS
Hard drives: ATA/SATA, SCSI, USB and FireWire
Network: not well...
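The remote-login tweak mentioned above is easy to script. This sketch edits a temporary copy of the file; on the box itself LOGIN would point at the real /etc/default/login, and note that the -i flag is the GNU sed form:

```shell
#!/bin/sh
# Comment out CONSOLE=/dev/console so root may log in remotely.
# Works on a temp copy here; LOGIN would be /etc/default/login on the NAS.
LOGIN=/tmp/login.demo
printf 'PASSREQ=YES\nCONSOLE=/dev/console\n' > "$LOGIN"   # sample file contents

sed -i 's/^CONSOLE=/#CONSOLE=/' "$LOGIN"   # GNU sed; Solaris sed has no -i
```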

Installation

Since BeleniX is a Live CD, it is more than enough for just playing around with ZFS.

Playing with ZFS

Future

Links and resources