In this series of articles, Jeffrey O. Grady, author of “System Verification,” delineates the basics of requirements planning and analysis, an important tool for using Agile programming techniques to achieve better code quality and reliability in complex embedded systems software and hardware projects. Part 3: Performance requirements analysis.
View the full article HERE
The accelerating need for ever higher data rates and serial I/O density sets demanding performance requirements for current and next generation SerDes transceivers. The PLL is the key to determining high speed link capabilities, since high quality clocks are required to meet bit error rate (BER) specifications of 10⁻¹² to 10⁻¹⁵. An ultra-low jitter wideband LC PLL has been developed to meet the exacting requirements of today's systems.
View the full article HERE
The unbearable pain of storage
Our world is gradually being submerged in an ocean of data. The torrent of information brings enormous volumes of data, which is both an opportunity and a challenge. On one hand, data retains value over long periods, so large amounts of old data must be kept; on the other hand, enterprises face relentless growth of new data. According to a ZDNet survey of users' storage pain points, 21% of users are troubled by capacity that grows too fast; 31% of the enterprises surveyed report annual data growth of 30%-40%, and enterprises whose growth exceeds 40% account for more than 35% of respondents. As data volumes keep expanding, the problems posed by old and new systems together grow more acute by the day.
The same survey of user pain points shows that for roughly 40% of users, data volume is snowballing, leaving system performance potentially unable to meet demand; massive data sets lack effective means of backup and processing; and heterogeneous systems further aggravate users' management burden.
According to the survey, more than 50% of users' storage systems can no longer meet demand. Most of these systems have been in service for many years; the old systems cannot meet performance requirements, yet cannot simply be abandoned, and so live on as the enterprise's "legacy." Heterogeneous, non-standardized environments are also the norm: 40% of users run systems from three vendors at once, while only 5% of enterprises rely on a single vendor's products. Most enterprises operate multiple system types from multiple vendors, with no unified standardized platform; even different product series from the same vendor suffer interoperability problems.
Second, the importance of data assets is self-evident, so protecting that data becomes all the more important. Yet many users have never put a reasonable backup strategy in place, so backup volumes accumulate over time and grow ever larger; and without appropriate data-reduction technology, redundant data floods the whole environment. Instead of achieving the original goal of protecting data, backup becomes an even heavier burden.
Facing so many problems and challenges at once, the classic methods are no longer effective and are widely criticized, so users must raise storage efficiency by reforming the storage architecture with appropriate technologies and methods. How, then, should the architecture be reformed? Since budgets are limited and simply adding more storage does not solve the underlying problem, the ideal route is to improve performance on top of existing resources without losing what is already in place.
Storage management calls for automation
Beyond limited IT budgets, human resources are also difficult to allocate. If optimization is handled manually by administrators, efficiency is extremely low; without careful planning, a great deal of time is consumed on useless work. So one key principle of the architectural reform described here is to optimize storage in an unattended way, letting automated software replace work that would otherwise consume large amounts of manual labor.
Where users run many applications, older systems usually over-allocate storage space to each one, leaving much of the capacity idle; allocating capacity according to actual demand is in fact the efficient approach. But having an administrator constantly watch capacity usage and assign storage space by hand obviously consumes enormous time and energy. Automated thin-provisioning technology can instead distribute storage space automatically as it is actually consumed, and alert users before capacity runs out, avoiding any impact on normal business operation.
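To make the idea concrete, here is a minimal Python sketch of thin provisioning, assuming a single pool: volumes are promised more capacity than physically exists, physical space is consumed only when data is written, and an alert fires at a threshold before the pool runs dry. The pool size, volume names and the 80% threshold are illustrative assumptions, not any vendor's API.

```python
# A minimal sketch of thin-provisioned allocation with a capacity alert.
# Pool size, volume names and the 80% threshold are illustrative only.

class ThinPool:
    def __init__(self, physical_gb, alert_threshold=0.8):
        self.physical_gb = physical_gb        # real capacity backing the pool
        self.used_gb = 0                      # space actually written
        self.alert_threshold = alert_threshold
        self.volumes = {}                     # name -> virtual (promised) size

    def create_volume(self, name, virtual_gb):
        # Thin provisioning: promise capacity without reserving it up front.
        self.volumes[name] = virtual_gb

    def write(self, name, gb):
        # Physical space is consumed only when data is actually written.
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical capacity")
        self.used_gb += gb
        if self.used_gb / self.physical_gb >= self.alert_threshold:
            print(f"ALERT: {self.used_gb}/{self.physical_gb} GB of pool used")

pool = ThinPool(physical_gb=1000)
pool.create_volume("crm", virtual_gb=2000)   # over-committed on purpose
pool.write("crm", 850)                       # triggers the capacity alert
```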
Many users have "hot" data that is accessed frequently and demands high I/O. On one hand, spotting hot data by human observation quickly enough adds to the administrator's burden and requires very careful planning; on the other hand, in the past we could only deliver high I/O by adding more hard disks, even when the actual data volume was not that large, driving capacity utilization below 10%. This is where automated tiering combined with solid-state disks (SSD) stands out: software automatically tracks access patterns, keeps frequently accessed or important data on high-speed storage media, and places data accessed less often on media with lower performance but lower cost, raising asset utilization while reducing operating cost.
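The sketch below illustrates one plausible form of such an automated tiering policy: access counts are tracked per block, and a periodic rebalance promotes frequently accessed blocks to SSD and demotes the rest to HDD. The threshold, block identifiers and two-tier layout are assumptions for illustration only.

```python
# A minimal sketch of frequency-based tiering: blocks accessed often are
# promoted to SSD, rarely-touched blocks demoted to HDD. The threshold
# and tier names are hypothetical.

from collections import Counter

HOT_THRESHOLD = 100   # accesses per scan interval; hypothetical value

class TieringEngine:
    def __init__(self):
        self.access_counts = Counter()   # block id -> accesses this interval
        self.tier = {}                   # block id -> "ssd" or "hdd"

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        # Runs periodically with no operator involvement.
        for block_id, count in self.access_counts.items():
            self.tier[block_id] = "ssd" if count >= HOT_THRESHOLD else "hdd"
        self.access_counts.clear()       # start a fresh measurement interval

engine = TieringEngine()
for _ in range(150):
    engine.record_access("orders-index")   # hot block
engine.record_access("2009-archive")       # cold block
engine.rebalance()
print(engine.tier)   # {'orders-index': 'ssd', '2009-archive': 'hdd'}
```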
Users who back up to disk know that each backup run writes large amounts of files and data identical to the previous run, producing multiple duplicate copies of the same data; as time passes, redundant data floods the valuable disk space. It is obviously impractical for an administrator to locate and delete the redundant data by hand, so an unattended data-reduction tool is needed to shrink the backup volume. Moreover, whether backing up inside the data centre or backing up remotely for disaster recovery, backups consume large amounts of bandwidth. Data deduplication technology compares the new backup data with previous backups and discards the redundant portions, reducing the bandwidth demands of both kinds of transfer and thereby speeding up the backup.
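As an illustration of the principle, the following sketch performs block-level deduplication by fingerprinting fixed-size chunks and storing (or transmitting) only chunks not already seen. The 4 KB chunk size and SHA-256 fingerprint are common but arbitrary choices here, not a description of any particular product.

```python
# A minimal sketch of block-level deduplication: each fixed-size chunk is
# fingerprinted, and chunks already in the store are skipped, so only new
# data is written (and, for remote backup, transmitted).

import hashlib

CHUNK_SIZE = 4096   # bytes; a common but arbitrary choice here

def backup(data: bytes, store: dict) -> list:
    """Return the recipe (list of fingerprints) needed to rebuild `data`."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:          # redundant chunks are never re-stored
            store[fp] = chunk
        recipe.append(fp)
    return recipe

store = {}
first = backup(b"A" * 8192 + b"B" * 4096, store)
second = backup(b"A" * 8192 + b"C" * 4096, store)  # only 1 new chunk stored
print(len(store))   # 3 unique chunks instead of 6
```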
According to the survey, 26% of users consider the main problem with backup to be the enormous data volume, which correspondingly stretches out the backup time. The primary task in fixing backup, then, is to reduce the volume of data backed up and shorten the backup window.
In short, with today's IT spending squeezed, human resources fixed, newly added data outpacing the previous year every year, outmoded storage systems unable to meet performance requirements, and both new and old data needing management, the lack of an effective unattended management tool places ever higher demands on administrators. So when users inevitably need to purchase new storage equipment, they can choose storage systems with automation features to ease these challenges. Below we introduce in detail some of the technologies and methods that deliver automated, effective storage optimization.
Amid all the recent attention on cloud and virtualization, storage is increasingly regarded as the foundational platform. Even today, many cloud computing offerings are limited to allocating CPU cores and a quantity of memory, with slow storage or some Internet-oriented IP technology. Recently, interesting advances in cloud computing and storage have appeared, notably Web Services access modes, so that storage access is no longer limited to device-file mount points or NFS.
The typical "enterprise-class features" of data storage and management keep pushing innovation in the IT infrastructure. Storage architects understand that these features are vital to key business and production applications, but today's cloud computing still lacks them. The goal of this white paper is to describe nine indispensable key elements of storage in enterprise cloud computing.
Key element 1: Performance
Performance comes at a cost. In a well-architected application, performance and cost are in balance. The key to achieving this is to use appropriate technology matched to the performance profile of the enterprise's business, which first requires translating the enterprise's business language into IT terms. Because this translation is difficult, enterprises often stall on a static IT architecture that cannot respond to changing business performance requirements. Cloud computing offers enterprises a platform far better able to respond to those changing requirements.
On early cloud computing platforms, storage I/O generally shows higher latency. This is because vendors focused on making cloud-hosted data easier to access, not on improving the service levels associated with performance, bandwidth and IOPS. Two factors account for the extra latency: the access mode and type, and the distributed configuration of the storage.
Access modes involve protocols located at layers above the physical layer of the OSI model, such as SOAP, NFS, TCP, IP and FCP. Data access over a shared physical layer (such as Ethernet) with several protocol layers on top (such as SOAP or NFS) generally produces more latency than over a dedicated physical layer (such as FC). Most cloud computing platforms include data access over the Internet, which produces more access latency than anything else on the market.
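A back-of-the-envelope model makes the point: if each protocol layer on the access path contributes its own overhead, the totals for a shared, multi-layer path and a dedicated path diverge quickly. All figures below are hypothetical placeholders chosen only to show how per-layer costs add up, not measurements.

```python
# An illustrative model of why a shared, multi-layer access path (e.g.,
# SOAP over TCP/IP over Ethernet) accrues more latency than a dedicated
# path (e.g., FCP over FC). All values are hypothetical placeholders.

SHARED_STACK = {"SOAP/XML": 500, "TCP/IP": 100, "Ethernet": 50}   # microseconds
DEDICATED_STACK = {"FCP": 20, "FC fabric": 10}                    # microseconds

def path_latency(stack):
    # Each protocol layer the request traverses adds its own overhead.
    return sum(stack.values())

print("shared path:   ", path_latency(SHARED_STACK), "us")
print("dedicated path:", path_latency(DEDICATED_STACK), "us")
```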
As for the storage media, most cloud hosting providers use SATA disks configured as RAID or JBOD. Because SATA (long regarded as nearline-class disk) performs somewhat below enterprise-class disk (typically FC), the performance of the storage devices falls below what the applications demand.
When relatively low-performance storage media are combined with a low-bandwidth, high-latency access mode, the resulting storage subsystem cannot support the demands of most key business applications. The consequence is that such schemes are usually suitable only for test and development.
By comparison, an enterprise cloud computing platform needs to offer a choice of storage tiers with different performance. As performance requirements change, for example when an application moves from test into the production environment, the storage platform should accommodate that change. An ideal enterprise cloud hosting environment should contain multiple performance zones that can be adjusted to deliver the appropriate I/O performance level for the business's requirements.
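One simple way to picture such performance zones is a policy that matches a workload's I/O requirement to the cheapest tier that satisfies it, as in the sketch below; the tier names, IOPS figures and costs are hypothetical, not any provider's catalog.

```python
# A minimal sketch of matching an application's performance requirement
# to a storage tier in a cloud with multiple performance zones. Tier
# names, IOPS figures and costs are hypothetical.

TIERS = [
    # (name, max sustained IOPS, relative cost per GB), cheapest first
    ("archive",   100,  1),
    ("standard", 1000,  3),
    ("premium", 10000, 10),
]

def pick_tier(required_iops: int) -> str:
    # Choose the cheapest tier that still meets the I/O requirement, so a
    # workload can change tiers as it moves from test to production.
    for name, iops, _cost in TIERS:
        if iops >= required_iops:
            return name
    raise ValueError("no tier meets this requirement")

print(pick_tier(500))     # test workload -> 'standard'
print(pick_tier(8000))    # production workload -> 'premium'
```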
Finally, to meet the performance requirements of high-end enterprise storage, a cloud computing scheme must adopt enterprise-class technology at or above the level in use today, generally FC SAN. Operational technique matters as much as the technology itself: in a virtualized environment, virtual machines carrying enterprise-class demands must be configured to deliver consistently high performance.