Search Results

Search found 59579 results on 2384 pages for 'data loss'.

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application that has to send several small pieces of data per second over the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt these data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, because the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a user-supplied passphrase, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course per-packet IVs (to prevent statistical analysis across packets). However, I am concerned about a few things: Each packet contains a small amount of data (a couple of integer values per packet), which makes the encrypted packets vulnerable to known-plaintext attacks (and therefore makes it easier to crack the key). Also, since the encryption key is derived from a passphrase, the effective key space is much smaller (I know the salt helps, but I have to send the salt over the network once and anyone can capture it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although that might take some time, once the key is cracked all the stored data can be decrypted, which would be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data over a connectionless protocol (UDP)? Is my approach the best way to do it? ...flawed? ...overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
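    For illustration, here is a minimal sketch of the scheme the question describes (PBKDF2-derived 128-bit key, AES-CBC with a fresh random IV prepended to each UDP datagram), written in Python with the third-party cryptography package. The host, port, payload and iteration count are illustrative assumptions, not recommendations, and the salt would still have to reach the receiver out of band or in a handshake packet.

        import os, socket, struct, hashlib
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.primitives import padding

        # Derive a 128-bit key from the passphrase and a random salt (PBKDF2).
        salt = os.urandom(16)                      # shared with the peer once, e.g. in a handshake
        key = hashlib.pbkdf2_hmac("sha256", b"user passphrase", salt, 100_000, dklen=16)

        def encrypt_packet(payload: bytes) -> bytes:
            iv = os.urandom(16)                    # fresh IV for every packet
            padder = padding.PKCS7(128).padder()   # CBC needs block-aligned input
            padded = padder.update(payload) + padder.finalize()
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            return iv + enc.update(padded) + enc.finalize()   # prepend IV so the receiver can decrypt

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = struct.pack("!ii", 42, 7)        # a couple of integer values, as in the question
        sock.sendto(encrypt_packet(payload), ("203.0.113.10", 5005))   # hypothetical receiver

    Note that this sketch only covers confidentiality; it does not authenticate packets, so a per-packet MAC (or an authenticated encryption mode) would still be needed in practice.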

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~100 bytes) from an Oracle 10G database table into SQL Server (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options I have tried, with a quick summary of each. Has anyone tried this before? Are there other, better options? Which option would be best in terms of performance and reliability? The times below were calculated by testing smaller amounts of data and extrapolating to estimate the time required. (1) Using the data import wizard on the SQL Server, or SSIS packages, to import the data; this will take around 150 hours to complete. (2) Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file; the issue here is the size of the flat file, which is expected to run into GBs. (3) Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time to avoid bringing the data over at all; performance is a big problem here, and I have limited control over the Oracle database in terms of creating table indexes. Regards, Uniball
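    For a rough sense of what a scripted, chunked transfer can look like (an alternative to building one multi-GB flat file), here is a minimal Python sketch that pulls rows from Oracle in batches and bulk-inserts them into SQL Server. The connection strings, driver name, and table/column names are hypothetical, and the cx_Oracle and pyodbc packages are assumed to be available; this is a sketch of the approach, not a tuned solution.

        import cx_Oracle
        import pyodbc

        BATCH = 50_000  # rows per round trip; tune for the 6 Mbit/s link

        src = cx_Oracle.connect("scott", "tiger", "orahost/ORCL")   # hypothetical Oracle connection
        dst = pyodbc.connect("DRIVER={SQL Server};SERVER=sqlhost;DATABASE=staging;Trusted_Connection=yes")

        read = src.cursor()
        read.arraysize = BATCH
        read.execute("SELECT id, col1, col2 FROM big_table")        # hypothetical source table

        write = dst.cursor()
        write.fast_executemany = True                               # pyodbc array binding for bulk inserts

        while True:
            rows = read.fetchmany(BATCH)
            if not rows:
                break
            write.executemany("INSERT INTO dbo.big_table (id, col1, col2) VALUES (?, ?, ?)", rows)
            dst.commit()                                            # commit per batch to keep the log small

        src.close()
        dst.close()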

    Read the article

  • Using data.table to aggregate

    - by dayne
    After multiple suggestions from SO users, I am finally trying to convert my code over to using data.tables. library(data.table) DT <- data.table(plate = paste0("plate",rep(1:2,each=5)), id = rep(c("CTRL","CTRL","ID1","ID2","ID3"),2), val = 1:10) > DT plate id val 1: plate1 CTRL 1 2: plate1 CTRL 2 3: plate1 ID1 3 4: plate1 ID2 4 5: plate1 ID3 5 6: plate2 CTRL 6 7: plate2 CTRL 7 8: plate2 ID1 8 9: plate2 ID2 9 10: plate2 ID3 10 What I would like to do is take the average of DT[,val] by plate when the id is "CTRL". I would normally aggregate the data frame, then use match to map the values back to a new column, 'ctrl'. Using the data.table package I can get: DT[id=="CTRL",ctrl:=mean(val),by=plate] > DT plate id val ctrl 1: plate1 CTRL 1 1.5 2: plate1 CTRL 2 1.5 3: plate1 ID1 3 NA 4: plate1 ID2 4 NA 5: plate1 ID3 5 NA 6: plate2 CTRL 6 6.5 7: plate2 CTRL 7 6.5 8: plate2 ID1 8 NA 9: plate2 ID2 9 NA 10: plate2 ID3 10 NA What I need is really: DT <- data.table(plate = paste0("plate",rep(1:2,each=5)), id = rep(c("CTRL","CTRL","ID1","ID2","ID3"),2), val = 1:10, ctrl = rep(c(1.5,6.5),each=5)) > DT plate id val ctrl 1: plate1 CTRL 1 1.5 2: plate1 CTRL 2 1.5 3: plate1 ID1 3 1.5 4: plate1 ID2 4 1.5 5: plate1 ID3 5 1.5 6: plate2 CTRL 6 6.5 7: plate2 CTRL 7 6.5 8: plate2 ID1 8 6.5 9: plate2 ID2 9 6.5 10: plate2 ID3 10 6.5 Eventually I would like to use much more complicated selections of the values, but I do not know how to select specific values, run some function, then map those values back to the appropriate row using data frames.

    Read the article

  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story? Hard disk goes missing, USB thumb drive goes missing, laptop goes missing... Not a week goes by that we don't hear about our data going missing... Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property... When I have spoken at Security and IT conferences, part of my message is "Why do you give your users data to lose in the first place?" I don't suggest they can't have access to it... in fact I work for the company that provides the premier data security and desktop solutions that DO provide access. Access isn't the issue. 'Keeping the data' is the issue. We are all human - we all make mistakes... I fault no one for having their car stolen or for dropping a USB thumb drive (well, except the thieves - I can certainly find some fault there). Where I find fault is in policy (or lack thereof, sometimes) that allows users to carry around private, and important, data with them. Mr. Director of IT - It is your fault, not theirs. Ms. CSO - Look in the mirror. It isn't like one can't find a network to access the data from. You are on a network right now. How many wireless ones (wifi, mifi, cellular...) are there around you, right now? Allowing employees to remove data from the confines of (wait for it...) THE DATA CENTER is just plain indefensible when it isn't required. The argument that the laptop had a password and the hard disk was encrypted is ridiculous. An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'. What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota. The full article is here, but the point was that a couple of laptops went missing in a couple of different cases, and.. well... no one knows where the data is, and yes - they were loaded with patient info. What were you thinking? Obviously you can't steal data from a Sun Ray appliance... since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Macs, Linux and Windows client devices... but in all cases, there is no keeping the data unless you explicitly allow for it in your policy. Since you can get at the data securely from any network, why would you want to take personal responsibility for it? Both Sun Rays and Secure Global Desktop are widely used in Healthcare... but clearly not widely enough. We need to do a better job of getting the message out - Healthcare (or insert your business type here) and distributed data don't mix. Then add Hot Desking and 'follow me printing' and you have something that Clinicians (and CSOs) love. Thanks for putting up my blood pressure, Denny.

    Read the article

  • ATG Live Webcast June 28: Scrambling Sensitive Data in EBS 12 Cloned Environments

    - by BillSawyer
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases. While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy. The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing. This ATG Live Webcast is your chance to come learn about the Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack from the experts. The agenda for the Oracle E-Business Suite Template for Data Masking Pack webcast includes the following topics: what does data masking do in E-Business Suite environments (de-identify the data, mask sensitive data, maintain data validity); how can EBS customers use data masking; and references. Join Eric Bing, Senior Director, and Elke Phelps, Senior Principal Product Manager, as they discuss the Oracle E-Business Suite Template for Data Masking Pack.
    Date: Thursday, June 28, 2012
    Time: 8:00 AM Pacific Standard Time
    Presenters: Eric Bing, Senior Director; Elke Phelps, Senior Principal Product Manager
    Webcast Registration Link (preregistration is optional but encouraged)
    To hear the audio feed: Domestic Participant Dial-In Number: 877-697-8128; International Participant Dial-In Number: 706-634-9568; Additional International Dial-In Numbers Link; Dial-In Passcode: 100865
    To see the presentation: The Direct Access Web Conference details are: Website URL: https://ouweb.webex.com; Meeting Number: 599097152
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Evaluating Oracle Data Mining Has Never Been Easier - Evaluation "Kit" Available

    - by chberger
    Now you can quickly and easily get set up to start using Oracle Data Mining for evaluation purposes. Just go to the Oracle Technology Network (OTN) and follow these simple steps.
    Oracle Data Mining Evaluation "Kit" Instructions
    Step 1: Download and install Oracle Database 11g Release 2. Anyone can download and install the Oracle Database for free for evaluation purposes; read the OTN web site for details. 11.2.0.1.0 is the minimum, 11.2.0.2 is better, and naturally 11.2.0.3 is best if you are a current customer and on active support. Either 32-bit or 64-bit is fine. 4GB of RAM or more works fine for SQL Developer and the Oracle Data Miner GUI extension. Downloading the database and installing it should take just about an hour or so, depending on your network and computer. For more instructions on setting up Oracle Data Mining see: http://www.oracle.com/technetwork/database/options/odm/dataminerworkflow-168677.html. When you install the Oracle Database, the Sample Examples data should also be installed, e.g. Release 2 Examples win32_11gR2_examples.zip (565,154,740 bytes). It contains examples of how to use the Oracle Database; download it if you are new to Oracle and want to try some of the examples presented in the documentation.
    Step 2: Install SQL Developer 3.1 (the Oracle Data Mining extension installs automatically).
    Step 3: Follow the four free step-by-step Oracle-by-Example e-training lessons: Setting Up Oracle Data Miner 11g Release 2 (covers the process of setting up Oracle Data Miner 11g Release 2 for use within Oracle SQL Developer 3.0); Using Oracle Data Miner 11g Release 2 (covers the use of Oracle Data Miner to perform data mining against Oracle Database 11g Release 2 -- you examine and solve a data mining business problem by using the Oracle Data Miner graphical user interface); Star Schema Mining Using Oracle Data Miner (covers the use of Oracle Data Miner to perform star schema mining against Oracle Database 11g Release 2); and Text Mining Using Oracle Data Miner (covers the use of Oracle Data Miner to perform text mining against Oracle Database 11g Release 2).
    That’s it! Easy, fun and the fastest way to get started evaluating Oracle Data Mining. Enjoy! Charlie

    Read the article

  • SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion

    - by pinaldave
    Earlier I talked about the kind of questions I do not like to get asked. Today we will go over a question which I do like to get asked. One of my readers worked through the various steps in my earlier blog post Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS. While reading the blog post he noticed that Data Quality Services was not providing very helpful suggestions, and he wrote me an email about it. Let us go over his email. “Pinal, I noticed in one of your images that DQS is not providing very helpful suggestions. First of all, DQS should be able to make intelligent guesses and make the necessary correction by itself. If it cannot do that, it should at least give us intelligent suggestions, but in the image included here I see the suggestions are not there either. Why is it so? Would you please tell me how to increase the number of suggestions? I do understand this may not be the preferable solution in many cases, but it all depends on the business case: there are cases when high sensitivity is required and there are cases when higher sensitivity is not required. I would like to seek your help here. –Sriram MD” This is indeed a great question. I see that Sriram understands that every system is different and every application has a different need, so I will not have to tell him this most important concept. The question is about how to change the sensitivity of the suggestions for correction in DQS. This option is available under the Configuration tab in the DQS client. Once you click on Configuration you will see the following screen; click the General Settings tab and you will see the Interactive Cleansing section. Under this section the first option is “Min score for suggestions”. As this is set to 0.7, every suggestion that matches with a probability of 0.7 or higher is displayed under the suggestion tab. You can see in the following image that there are no suggestions, as the min score for suggestions is set to 0.7 and there is no record which qualifies with that much confidence. Now let us change the value of Min score for suggestions to 0.5. The lower value lets DQS offer further suggestions for values that match with a probability over 0.5. In our case the suggestions it provides are also accurate, but this may not be true for your sample; every sample is different, so you should manually review the suggestions before approving them. This is a simple blog post to demonstrate how to change the confidence value for the suggestions which Data Quality Services provides. Use this feature with care and always tune it according to your datasets and record diversity. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation. Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, many lock-and-send operations, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html. Fragmentation is likely to occur in the following situations: read/write databases where users are constantly updating data; databases that execute calculations around the clock; databases that frequently update and recalculate dense members; data loads that are poorly designed; databases that contain a significant number of Dynamic Calc and Store members; and databases that use an isolation level of uncommitted access with commit block set to zero. There are two types of data block fragmentation: free space tracking, which is measured using the Average Fragmentation Quotient statistic, and block order on disk, which is measured using the Average Cluster Ratio statistic. Average Fragmentation Quotient: this ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and is either appended at the end of the file or placed in another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have, therefore the bigger the .PAG file and the longer it takes to traverse through the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space. Average Cluster Ratio: this statistic describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the outline. As you load data and calculate data blocks, the sequence can start to fall out of order, because when you write to a block it may not be able to be placed back in the exact same spot in the database where it existed before. The lower this number, the more out of order the blocks become and the more it affects performance. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1, e.g. 0.01032828, means the data blocks are getting further out of order relative to the outline order. Eliminating Data Block Fragmentation: both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure: 1. Implicit restructures, which happen when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files, restructuring the data blocks in the .PAG files; when Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures. 2. Explicit restructures, which happen when the database restructure is initiated manually. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks. Empty Blocks vs. Fragmentation: the existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block will add to the existing database block count and will be included in the block counts of the database properties. There are no statistics for empty blocks; the only way to determine whether empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • Best way to 'harden' embedded ext4 file server against unexpected loss of power?

    - by Jeremy Friesner
    Hi all, First, a little background: my company makes an audio streaming device that is a headless, rack-mounted Linux box with a couple of SSDs attached. Each SSD is formatted with ext4. The users can connect to the system using Samba/CIFS to upload new audio files or access existing ones. There is also custom software for streaming out audio over the network. This is all fine. The only problem is that the users are audio people, not computer people, and see the system as a 'black box', not as a computer. Which means that at the end of the day, they aren't going to ssh in to the box and enter "/sbin/shutdown -h"; they are just going to cut power to the rack and leave, and expect things to still work properly the next day. Since ext4 has journalling, journal checksumming, etc, this mostly works. The only time it doesn't work is when someone uploads a new file via Samba and then cuts power to the system before the uploaded data has been fully flushed to the disk. In that case, they come in the next day and find that their new file has been truncated or is missing entirely, and are unhappy. My question is, what is the best way to avoid this problem? Is there a way to get smbd to call "sync" at the end of every upload? (Performance on uploads isn't so important, since they only happen occasionally). Or is there a way to tell ext4 to automatically flush within a few seconds of any change to a file? (Again, performance can be sacrificed for safety here) Should I set a particular write-ordering mode, activate barriers, etc?
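    One way to approach this (a sketch, not a tested recipe for this particular appliance) is to force smbd to sync data on writes and to shorten ext4's journal commit interval; the options below are standard Samba and ext4 settings, but the exact values and the share/device names are illustrative, and both changes trade write performance for safety:

        # /etc/samba/smb.conf -- per-share, push data to disk on each write/close
        [audio]
            strict sync = yes
            sync always = yes

        # /etc/fstab -- mount the SSDs with a short journal commit interval and barriers on
        /dev/sdb1  /mnt/audio1  ext4  data=ordered,commit=1,barrier=1  0  2

    Going further, data=journal gives stronger guarantees at an additional cost, and having the upload path call fsync() on file close is the most targeted fix if the workflow allows it.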

    Read the article

  • unable to recover data from failed hdd

    - by Eslam Elyamany
    my hdd failing (or maybe totally dead) i've connected the hdd via USB but it doesn't appear in fdisk Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0xe9fb38fb Device Boot Start End Blocks Id System /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 206848 40959999 20376576 7 HPFS/NTFS/exFAT /dev/sda4 40962046 976771071 467904513 5 Extended Partition 4 does not start on physical sector boundary. /dev/sda5 82913280 86910975 1998848 82 Linux swap / Solaris /dev/sda6 86913024 394113023 153600000 7 HPFS/NTFS/exFAT /dev/sda7 40962048 82913279 20975616 83 Linux /dev/sda8 394122708 976768064 291322678+ 7 HPFS/NTFS/exFAT Partition 8 does not start on physical sector boundary. no sdc appears here , BUT it's appears on /dev/ rootghost-lap:/home/ghost# ls /dev/sd* /dev/sda /dev/sda2 /dev/sda5 /dev/sda8 /dev/sdb /dev/sdc1 /dev/sdc2 /dev/sdc6 /dev/sdc8 /dev/sda1 /dev/sda4 /dev/sda6 /dev/sda9 /dev/sdc /dev/sdc10 /dev/sdc5 /dev/sdc7 /dev/sdc9 also it appears in proc Code: rootghost-lap:/home/ghost# cat /proc/partitions major minor #blocks name 8 0 488386584 sda 8 1 102400 sda1 8 2 20376576 sda2 8 4 1 sda4 8 5 1998848 sda5 8 6 153600000 sda6 8 8 291322678 sda8 8 9 20975616 sda9 11 0 1048575 sr0 11 1 99136 sr1 8 32 244198583 sdc 8 33 14651248 sdc1 8 34 1 sdc2 8 37 15380480 sdc5 8 38 4153344 sdc6 8 39 48829536 sdc7 8 40 48829536 sdc8 8 41 110374551 sdc9 8 42 1975963 sdc10 and dmesg : [10604.777168] end_request: I/O error, dev sdc, sector 1 [10604.817238] sd 26:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [10604.817243] sd 26:0:0:0: [sdc] Sense Key : Aborted Command [current] [10604.817248] sd 26:0:0:0: [sdc] Add. Sense: No additional sense information [10604.817253] sd 26:0:0:0: [sdc] CDB: Read(10): 28 00 00 00 00 02 00 00 06 00 ok now , let's see what i've tried testdisk to check for partitions -- failed dd to copy data from /dev/sdcX -- provide strange output size for example /dev/sdc1 is about 15G , the output for dd is 62G+ so i had to cancle it safecopy successfully made an image for partitons , but can't fix images, can't mount it, can't do any thing with it and some other tools i've tried and all failed , so any idea ?

    Read the article

  • Set up layer 2 vlan between 2 data centres

    - by user41679
    Hello, Our data centre provider operates 2 sites, and we currently have equipment in one and would like to have equipment in the second. They've told me that they operate a layer 2 VLAN between the 2 sites over a 20 Gbit connection, and that they'd just give me an ethernet cable at each end to connect the locations. At the current site we have Cisco 2960 48TC-L switches, all the machines are on a 192.168.x.x subnet, and we have Cisco firewalls through which we connect to our internet provider. My question is: what would I need to do to connect the 2 sites? Could I just plug the ethernet cables they provide into the Cisco switches, and have the same switches at the other end? Would I need to set up a separate internal network on the other side and connect both through the firewalls? Would the Cisco switches need special configuration? We expect to maintain a number of connections between the 2 sites, and each site would have its own internal DNS name like dc1.xx.com. Sorry if I'm being vague or haven't included enough information; I've a fairly good knowledge of hardware but we're down a netops guy at the moment and I'd like to get both sites on-line ASAP! Thanks in advance!

    Read the article

  • OData to the rescue. Exposing the eventlog as a data feed

    - by cibrax
    In one of the project where I was working one, we used the Microsoft Enterprise Library Exception Application Block integration with WCF for logging all the technical issues on the services/backend in Windows Event Log. This application block worked like a charm, all the errors were correctly logged on the Event Log without even needing to modify the service code. However, we also needed to provide a quick way to expose all those events to the different system users so they could get access to all the them remotely. In just a couple of minutes I came up with a simple solution based on ADO.NET Data Services. ADO.NET data services is very powerful in this sense, you only need to provide a IQueryable implementation, and that’s all. You get a RESTful service with rich query support for free. In this sample, I used Linq to Objects to get the latest entries from the Event Log, and I also filter the entries by the category used by the Application Block to avoid loading unnecessary entries in memory. public class LogDataSource     {         string source;         public LogDataSource(string source)         {             this.source = source;         }         public LogDataSource()         {         }         public IQueryable<LogEntry> LogEntries         {             get { return GetEntries().AsQueryable().OrderBy(e => e.TimeGenerated); }         }         private IEnumerable<LogEntry> GetEntries()         {             var applicationLog = System.Diagnostics.EventLog.GetEventLogs().Where(e => e.Log == "Application")                 .FirstOrDefault();             var entries = new List<LogEntry>();             if (applicationLog != null)             {                 foreach (EventLogEntry entry in applicationLog.Entries)                 {                     if (source == null || entry.Source.Equals(source, StringComparison.InvariantCultureIgnoreCase))                     {                         entries.Add(new LogEntry                         {                             Category = entry.Category,                             EventID = entry.InstanceId,                             Message = entry.Message,                             TimeGenerated = entry.TimeGenerated,                             Source = entry.Source,                         });                     }                 }             }             return entries.OrderByDescending(e => e.TimeGenerated)                         .Take(200);         }     } LogEntry is class I created for this service to expose an Event Log Entry.     
[EntityPropertyMappingAttribute("Source",         SyndicationItemProperty.Title,         SyndicationTextContentKind.Plaintext, true)]     [EntityPropertyMapping("Message",         SyndicationItemProperty.Summary,         SyndicationTextContentKind.Plaintext, true)]     [EntityPropertyMapping("TimeGenerated",         SyndicationItemProperty.Updated,         SyndicationTextContentKind.Plaintext, true)]     [DataServiceKey("EventID")]     public class LogEntry     {         public long EventID         {             get;             set;         }         public string Category         {             get;             set;         }         public string Message         {             get;             set;         }         public DateTime TimeGenerated         {             get;             set;         }         public string Source         {             get;             set;         }     } As you can see, I used the new feature “Friendly feeds” to map several properties in the entries with standard ATOM elements. The “DataServiceKey” is only necessary because I am using the Reflection Provider (the exposed IQueryable implementation is just Linq to Objects) rather than the default Entity Framework Provider. The data service implementation is also quite simple, just a couple of lines were needed to expose the data source created previously. public class LogDataService : DataService<LogDataSource>     {         public static void InitializeService(IDataServiceConfiguration config)         {             config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);         }         protected override LogDataSource CreateDataSource()         {             string source = ConfigurationManager.AppSettings["EventLogSource"];             if (source == null)             {                 throw new ApplicationException("The EventLogSource appsetting is missing in the configuration file");             }             return new LogDataSource(source);         }     } With this implementation in place, the final users not only get a feed with all the latest errors in the event log, but also support for performing queries against that data. This is one of the great things about ADO.NET Data services.
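    To give a sense of the "rich query support for free", here are a few example requests against such a service. The host name and .svc path are hypothetical; the query options ($orderby, $top, $filter) are standard ADO.NET Data Services syntax, with spaces URL-encoded in practice.

        Latest 10 entries, newest first:
            http://server/LogDataService.svc/LogEntries?$orderby=TimeGenerated desc&$top=10
        Only entries from a given source:
            http://server/LogDataService.svc/LogEntries?$filter=Source eq 'MyWcfService'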

    Read the article

  • Visiting the Emtel Data Centre

    Back in February at the first event of the Emtel Knowledge Series (EKS) I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... My first approach wasn't that promising and far from successful but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen... Setting up an appointment and pre-requisites The major improvement came during a Boot Camp for Windows Phone 8.1 App development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at the Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to get access to the Emtel data centre it is requested that you provide full name and National ID of anyone going to visit. Also, it should be noted that there was only a limited amount of seats available. Anyways, packed with this information I posted through the usual social media channels. Responses came in very quickly and based on First-come, first-serve (FCFS) principle I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to go to the data centre in Arsenal. The day before and on the day of our meeting, Arvin send me a reminder to check whether everything is still confirmed and ready to go... Of course, it was! Arriving at the Emtel Data Centre As I'm coming from Flic En Flac towards the North, we agreed that I'm going to pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean eventually might be living in another time zone compared to the rest of us. Anyway, after some extended stop we were complete and arrived just in time in Arsenal to meet and greet with Ish and Veer. Again, Emtel is taking access procedures to their data centre very serious and the gate stayed close until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards it was a straight-forward processing. The ward was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us brief introduction into the rules and regulations during our visit, like no photography allowed, not touching the buttons, and following his instructions through the whole visit. Of course! Inside the data centre Next, he explained us the multi-factor authentication system using a combination of bio-metric data, like finger print reader, and "classic" pin panel. The Emtel data centre provides multiple services and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. Very impressive to get to know about the considerations that have been done in choosing the right location and how to set up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings. Strengths of the Emtel TIER 3 Data Centre, Mauritius Finally, we were guided into the first server room. 
And wow, the whole setup is cleverly planned and outlined in the architecture. From the false floor and ceilings in order to provide optimum air flow, over to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions in order to analyse and watch changes in temperature, smoke detection and other parameters. And not surprisingly everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and the Emtel's one has been designed to be a TIER 4 building but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius and through several other providers for onwards connectivity. The data centre is part of the National Fibre Optic Gigabit Ring and has redundant internet connectivity onwards. Meanwhile, Arvin managed to join our little group of geeks and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens to monitor any kind of activities on the data lines, the managed servers and the activity in and around the building was great. Even though I'm using a multi-head setup since years I cannot keep it up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office. Clear advantages of hosting your e-commerce and mobile backends locally After the completely isolated NOC area we continued our Q&A session with Kamlesh and Arvin in the second server room which is dedictated to shared environments. On first thought it should be well-noted that there is lots of space for full-sized racks and therefore co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering of moving one of my root servers abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency. This would be a good improvement, especially for upcoming e-commerce solutions for two of my local clients. Later on, we actually started the conversation of additional services that could be a catalyst for the local market in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. Until today Emtel does not provide virtualised server environments but there might be ongoing plans in the future to cover this field as well. Emtel is a mobile operator and internet connectivity provider in the first place, entering a market of managed and virtualised server infrastructures including capacities in terms of cloud storage and computing are rather new and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out... 
And I appreciate Emtel's approach towards a solid fundament and then building new services on top of that. Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs. Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius More details are thoroughly described in Emtel's brochure of their data centre. Check out their PDF document here. Thanks for this opportunity Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank anyone at Emtel involved in the process of making it happen, and especially to Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • DVCS and data loss?

    - by David Wolever
    After almost two years of using DVCS, it seems that one inherent "flaw" is accidental data loss: I have lost code which isn't pushed, and I know other people who have as well. I can see a few reasons for this: off-site data duplication (ie, "commits have to go to a remote host") is not built in, the repository lives in the same directory as the code and the notion of "hack 'till you've got something to release" is prevalent... But that's beside the point. I'm curious to know: have you experienced DVCS-related data loss? Or have you been using DVCS without trouble? And, related, apart from "remember to push often", is there anything which can be done to minimize the risk?

    Read the article

  • Jquery data retrival returns null with json data

    - by user545520
    Hello everyone, I have the below sample JSON file: {"jsonData":{"REC":[{"TEST":"T","TEST1":"T1","TEST2":"T2"},{"R":"R","R1":"R1","R3":"R3"}], "DATA":{"FIRST":0,"SEC":1}}}. I want to retrieve data from the JSON file. I am trying like below, but it is giving null. From the result object I am retrieving data like this: to retrieve the value T: this.jsonData.REC.TEST; to retrieve the value R1: this.jsonData.DATA.FIRST. Please correct me if I am doing anything wrong. Thanks in advance.
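    For what it's worth, in that structure REC is an array, so an element index is needed before the field name. A quick illustration in Python (the variable name is just for the example):

        import json

        doc = json.loads('{"jsonData":{"REC":[{"TEST":"T","TEST1":"T1","TEST2":"T2"},'
                         '{"R":"R","R1":"R1","R3":"R3"}],"DATA":{"FIRST":0,"SEC":1}}}')

        print(doc["jsonData"]["REC"][0]["TEST"])   # "T"  -- first element of the REC array
        print(doc["jsonData"]["REC"][1]["R1"])     # "R1" -- second element of the REC array
        print(doc["jsonData"]["DATA"]["FIRST"])    # 0

    The equivalent jQuery/JavaScript access follows the same pattern, e.g. data.jsonData.REC[0].TEST.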

    Read the article

  • Android: Image displayed at Webview from Url with high quality loss

    - by Merlino
    I want to display an image from a URL with a WebView on Android. With Android phones running version 1.5 and 1.6 there is no problem, but with the same pic and the same code on an Android phone running version 2.0 the pic is totally pixelated. It looks like Android is resizing the image to a smaller one first and then resizing it back to "normal" size. Unfortunately it's important to display the pic without any quality loss. I tried to put it in the source folder to show it as a normal image, but on Android 2.0 I get an exception because the image is too big (on Android 1.6 there is no problem). Any ideas how I can display the image without quality loss on Android 2.0?

    Read the article

  • Is information a subset of data?

    - by Jason Baker
    I apologize as I don't know whether this is more of a math question that belongs on mathoverflow or if it's a computer science question that belongs here. That said, I believe I understand the fundamental difference between data, information, and knowledge. My understanding is that information carries both data and meaning. One thing that I'm not clear on is whether information is data. Is information considered a special kind of data, or is it something completely different?

    Read the article

  • Transform data in FMPXMLRESULT grammar into a "Content Standard for Digital Geospatial Metadata (CSDGM)" record

    - by Andrew Igbo
    I have a problem in FileMaker; I wish to link the METADATA element/FIELD element “NAME” attribute to its corresponding data in the RESULTSET element/COL element. However, I also wish to map the METADATA element/FIELD element “NAME” to "Content Standard for Digital Geospatial Metadata (CSDGM)" metadata elements Sample XML Metadata Record with CSDGM Essential Elements Louisiana State University Coastal Studies Institute 20010907 Geomorphology and Processes of Land Loss in Coastal Louisiana, 1932 – 1990 A raster GIS file that identifies the land loss process and geomorphology associated with each 12.5 meter pixel of land loss between 1932 and 1990. Land loss processes are organized into a hierarchical classification system that includes subclasses for erosion, submergence, direct removal, and undetermined. Land loss geomorphology is organized into a hierarchical classification system that includes subclasses for both shoreline and interior loss. The objective of the study was to determine the land loss geomorphologies associated with specific processes of land loss in coastal Louisiana.

    Read the article

  • MySQL data type: Text,,, Erroring: Data Too Long

    - by nobosh
    I have a field as follows in MySQL: Type: Text Length: 0 Decimals: 0 And when I try to insert data around the size of 4 pages of MS Word, Coldfusion errors with: Data Too Long from the DB. I thought TEXT data type was able to expand and handle this size of data? What am I missing and what can I do?

    Read the article

  • Data Access Layer - static list objects and caching

    - by Truegilly
    Hello, I am developing a site using .NET MVC. I have a data access layer which basically consists of static list objects that are created from data within my database. The method that rebuilds this data first clears all the list objects; once they are empty, it then adds the data. Here is an example of one of the lists I'm using; it's a method which generates all the UK postcodes. There are about 50 methods similar to this in my application that return all sorts of information, such as towns, regions, members, emails etc. public static List<PostCode> AllPostCodes = new List<PostCode>(); When the rebuild method is called it first clears the list: ListPostCodes.AllPostCodes.Clear(); Next it re-builds the data by calling the GetAllPostCodes() method: /// <summary> /// static method that returns all the UK postcodes /// </summary> public static void GetAllPostCodes() { using (fab_dataContextDataContext db = new fab_dataContextDataContext()) { IQueryable AllPostcodeData = from data in db.PostCodeTables select data; IDbCommand cmd = db.GetCommand(AllPostcodeData); SqlDataAdapter adapter = new SqlDataAdapter(); adapter.SelectCommand = (SqlCommand)cmd; DataSet dataSet = new DataSet(); cmd.Connection.Open(); adapter.FillSchema(dataSet, SchemaType.Source); adapter.Fill(dataSet); cmd.Connection.Close(); // create the objects foreach (DataRow row in dataSet.Tables[0].Rows) { PostCode postcode = new PostCode(); postcode.ID = Convert.ToInt32(row["PostcodeID"]); postcode.Outcode = row["OutCode"].ToString(); postcode.Latitude = Convert.ToDouble(row["Latitude"]); postcode.Longitude = Convert.ToDouble(row["Longitude"]); postcode.TownID = Convert.ToInt32(row["TownID"]); AllPostCodes.Add(postcode); postcode = null; } } } The rebuild occurs every hour, which ensures the site always has a fresh set of cached data. The issue I've got is that occasionally, if the server is hit by a request during a rebuild, an exception is thrown. The exception is "Index was outside the bounds of the array." and it occurs while a list is being cleared: ListPostCodes.AllPostCodes.Clear(); // throws exception - although it's not always this particular list. Once this exception is thrown the application dies and all users are affected; I have to restart the server to fix it. I have 2 questions... If I utilise caching instead of static objects, would this help? Is there any way I can say "while the rebuild is taking place, wait for it to complete before accepting requests"? Any help is most appreciated ;) truegilly
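    The root cause here is that requests read the shared lists while the rebuild empties and refills them in place. One pattern that avoids the race entirely is to build a complete new collection and then swap the reference in a single step, so readers always see either the old snapshot or the new one. A minimal illustration of the idea in Python (the class and loader are hypothetical; the C# equivalent would assign a newly built List<PostCode> to the static field, or use a caching layer with the same swap semantics):

        import threading

        class PostCodeCache:
            def __init__(self):
                self._postcodes = []           # currently published snapshot
                self._lock = threading.Lock()  # serialises rebuilds only, not reads

            def all_postcodes(self):
                return self._postcodes         # readers always get a complete snapshot

            def rebuild(self, load_from_db):
                with self._lock:
                    fresh = list(load_from_db())  # build the new list completely first...
                    self._postcodes = fresh       # ...then publish it with a single assignment

        cache = PostCodeCache()
        cache.rebuild(lambda: [("SW1A", 51.501, -0.142)])  # hypothetical loader; the real one queries the DB

    Readers never observe a half-cleared list because the old list is never mutated; it is simply replaced.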

    Read the article

  • Get value of "data"

    - by Nicole Loyal-Windham
    Hi, I need to figure out the value of data strings with jquery, for example like this: { label: "Beginner", data: 2}, { label: "Advanced", data: 12}, { label: "Expert", data: 22}, to add them up. Something like: var sum = data1+data2+data3; alert(sum); So the result for this example would be 36. Appreciate your help! Nicole

    Read the article
