Search Results

Search found 59692 results on 2388 pages for 'legacy data'.

Page 116/2388 | < Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday’s blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get you familiar with how it works and which important areas of NuoDB you should learn. As we have already installed NuoDB, we will now quickly start with two of the important areas in NuoDB: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin Section of the NuoDB Console works. In the next blog post we will learn how the Explorer Section works. Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the following screen: On this screen you can see a big Start QuickStart button. Click on the button and it will bring you to the following screen. On this screen you will find very important information about Domain and Database Settings. It is our habit not to read what is written on the screen and to keep clicking Continue without reading. Because we are familiar with most wizards, we can often miss a very important message on the screen. Please note the Domain Settings and Database Settings information from the following screen before clicking on Create Database. Domain Settings User: quickstart Password: quickstart Database Settings User: dba Password: goalie Database: test Schema: HOCKEY Once you click on the Create Database button it will immediately start creating the sample database. First, it will start a Storage Manager and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. Once the sample database has been created successfully, it will show the following screen. Now is the time when we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, it will first show the following login screen. Enter “domain” for the username and “bird” for the password. Alternatively, you can enter “quickstart” twice, for both the username and the password. That works too. Once you enter the Admin Section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this Administrative section you can do any of the following tasks: create a view of the entire domain; add and remove databases; start and stop NuoDB Transaction Engines and Storage Managers; and monitor transactions across all the NuoDB databases. On the right side of the Admin Section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find out information about the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the “1 host” link you will be able to see various processes, CPU usage and other information. In the Processes Section you can see that there are two different types of processes. The first process (where you can see the floppy drive icon) represents a running Storage Manager process and the second a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of the data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.
    I hope today’s tutorial gives you enough confidence to try out NuoDB and check out various administrative activities with the database. I am personally impressed with their dashboard and its various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables

    - by pinaldave
    I love interesting conversations related to SQL Server. One of my friends, Madhivanan, always comes up with an interesting point of conversation. Here is one of the conversations between us. I am very confident this blog post will teach you something new. Madhi: How do I know if any table has a uniqueidentifier column used in it? Pinal: I am sure you know that you can do it through some DMVs or catalogue views. Madhi: I know that, but how can we do that without using DMVs or catalogue views? Pinal: Hm… what can I use? Madhi: You can use the table name. Pinal: Easy, just say SELECT YourUniqueIdentCol FROM Table. Madhi: Hold on, the question seems to be not clear to you – you do not know the name of the column. As a matter of fact, you do not even know if the table has a uniqueidentifier column. The only information you have is the table name. Pinal: Madhi, it seems like you are changing the question when I am close to the answer. Madhi: Well, are you clear now? Let me say it again – how do I know if any table has a uniqueidentifier column, and what is its value, without using any DMVs or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Pinal: Do you know the answer? Madhi: Yes. I just wanted to test your knowledge about SQL. Pinal: I will have to think. Let me accept that I do not know it right away. Can you share the answer please? Madhi: I won! Here it goes! Pinal: When I have friends like you – who needs enemies? Madhi: (laughter which did not stop for a minute). CREATE TABLE t ( GuidCol UNIQUEIDENTIFIER DEFAULT newsequentialid() ROWGUIDCOL, data VARCHAR(60) ) INSERT INTO t (data) SELECT 'test' INSERT INTO t (data) SELECT 'test1' SELECT $rowguid FROM t DROP TABLE t This is indeed very interesting to me. Please note that this is not the optimal way, and there will be many other ways to retrieve the uniqueidentifier column name and value. What I learned from this was that if I am in a rush to check whether a table has a uniqueidentifier column and I do not know its name, I can use SELECT TOP (1) $rowguid and quickly find out the name of the column. I can later use the same column name in my query. Madhi did teach me this new trick. Did you know this? What are other ways to check for the existence of a uniqueidentifier column in a database? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
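
    As a quick aside (my addition, not from the post): the same trick can be wrapped in application code. Here is a minimal C# sketch, assuming an already-open SqlConnection and a trusted table name (no injection handling); if the table has no ROWGUIDCOL column, the query simply throws and we treat that as "not present":

        using System.Data.SqlClient;

        static class RowGuidProbe
        {
            // Returns the uniqueidentifier (ROWGUIDCOL) column name, or null if the table has none.
            public static string FindRowGuidColumn(SqlConnection connection, string tableName)
            {
                // tableName is assumed to come from a trusted source.
                using (var command = new SqlCommand(
                    "SELECT TOP (1) $rowguid FROM " + tableName, connection))
                {
                    try
                    {
                        using (var reader = command.ExecuteReader())
                        {
                            return reader.GetName(0);   // the real column name behind $rowguid
                        }
                    }
                    catch (SqlException)
                    {
                        return null;                    // no ROWGUIDCOL/uniqueidentifier column
                    }
                }
            }
        }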

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation. Each record will have some combination of these functions run on it: Sum, Aggregate, Sum over the last 90 seconds, or Ignore. A data record looks like this: Date;Data;ID Question: Assuming that ID is an int of some kind, and that int corresponds to a matrix of some delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print) but I don't know how to make the right set of delegates fire based on the source data (say, print the evens and sum the odds in this sample). using System; using System.Threading; using System.Collections.Generic; internal static class TestThreadpool { delegate int TestDelegate(int parameter); private static void Main() { try { // this approach works if void is returned. //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello"); int c = 0; int w = 0; ThreadPool.GetMaxThreads(out w, out c); bool rrr = ThreadPool.SetMinThreads(w, c); Console.WriteLine(rrr); // perhaps the above needs time to set up Thread.Sleep(1000); DateTime ttt = DateTime.UtcNow; TestDelegate d = new TestDelegate(PrintOut); List<IAsyncResult> arDict = new List<IAsyncResult>(); int count = 1000000; for (int i = 0; i < count; i++) { IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d); arDict.Add(ar); } for (int i = 0; i < count; i++) { int result = d.EndInvoke(arDict[i]); } // Give the callback time to execute - otherwise the app // may terminate before it is called //Thread.Sleep(1000); var res = DateTime.UtcNow - ttt; Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds); } catch (Exception e) { Console.WriteLine(e); } Console.ReadKey(true); } static int PrintOut(int parameter) { // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:"+parameter); var tmp = parameter * parameter; return tmp; } static int Sum(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static int Count(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static void Callback(IAsyncResult ar) { TestDelegate d = (TestDelegate)ar.AsyncState; //Console.WriteLine("Callback is delayed and returned") ;//d.EndInvoke(ar)); } }
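
    One possible shape for the "launch map" the question asks about (a sketch of my own, not an answer from the thread; the record strings and delegate bodies are placeholders) is a dictionary keyed by the record's ID, mapping to the set of delegates to fire for that ID:

        using System;
        using System.Collections.Generic;

        internal static class LaunchMapSketch
        {
            private delegate int TestDelegate(int parameter);

            private static int PrintOut(int p) { Console.WriteLine(p); return p; }
            private static int Sum(int p)      { return p; }   // placeholder math
            private static int Count(int p)    { return 1; }   // placeholder math

            private static void Main()
            {
                // ID -> the delegates that should run for records carrying that ID.
                var launchMap = new Dictionary<int, TestDelegate[]>
                {
                    { 0, new TestDelegate[] { PrintOut } },    // e.g. evens: print
                    { 1, new TestDelegate[] { Sum, Count } }   // e.g. odds: sum + count
                };

                foreach (var record in new[] { "2011-01-12;42;0", "2011-01-12;7;1" })
                {
                    var fields = record.Split(';');            // Date;Data;ID
                    int data = int.Parse(fields[1]);
                    int id = int.Parse(fields[2]);

                    TestDelegate[] handlers;
                    if (launchMap.TryGetValue(id, out handlers))
                    {
                        // Invoked synchronously here for clarity; the original sample's
                        // BeginInvoke/EndInvoke pattern works per handler just as well.
                        foreach (var handler in handlers)
                            handler(data);
                    }
                }
            }
        }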

    Read the article

  • Copying 500GB Data to EC2 Instances Local Drive

    - by iCode
    Please do not ask me why (they made me), but I have to copy 500GB of data to the local drive of every one of the 200 nodes/instances that I am launching in EC2. For reasons beyond this post, this data must be on the local drive and not an EBS drive, so I cannot benefit from snapshots. What is the fastest way that I can manage this? Copying from S3 to each node takes a long time. I tried attaching an EBS volume containing the data to every node and then copying the data from EBS to the local drive, but that also takes a long time (several hours). Now I am also thinking of using BitTorrent, but I am not sure how well it is going to work. What is the best way to copy 500GB of static data to each local drive of 200 EC2 instances? The 500GB of data is composed of several hundred files of varying size, but the biggest file is 20GB.

    Read the article

  • Recovered all previous data of partition in Windows

    - by Komal Sorathiya
    My whole drive's data was lost because I installed Ubuntu. After that, I formatted my laptop and made new drives, in which one partition is not accessible because it is a raw partition. I recovered some data from that partition using iCare Recovery software. Then I fully formatted that partition and put data in that place. I removed all files from it. My problem is that I want to recover my data which was deleted because of Ubuntu. I can recover the most recent data with iCare Recovery, but I cannot recover the previous data. Please help me with that. I have been trying this for many days. Thanks

    Read the article

  • 403 forbidden while submitting a POST request with image data via iPhone application

    - by binnyb
    I am creating an iOS application which allows users to send image/text data to my webserver via a POST request. I am successfully sending POSTs to the server when image data is not included in the request. Any time I POST with image data the server spits back a 403 Forbidden. I have tried adding the following to the .htaccess file in the directory of the script with no luck: Options +Indexes FollowSymLinks +ExecCGI Order allow,deny Allow from all Web browsers and Android devices can successfully POST image data to the script; the only device which cannot is the iPhone. POSTing data to other hosting providers works as expected - it is just this host (ipowerweb.com). I noticed that trying to POST to ANY script on the server with data returns a 403 Forbidden. Another note: I can successfully POST to another server that is hosted by ipowerweb, but mine can't seem to handle it. My host has tried to resolve the issue but cannot, and they have marked it on their end as "resolved", so no more help from them. I wish to keep this host as moving would be a pain - I will change hosts as a last resort, so please help me! Why am I getting this 403 Forbidden error only when I submit data via my iPhone application? How can I resolve the issue so I can successfully POST data? Any advice on what I can do would be greatly appreciated. Edit: as requested, here are the response headers: { Connection = close; "Content-Length" = 217; "Content-Type" = "text/html; charset=iso-8859-1"; Date = "Wed, 12 Jan 2011 19:11:19 GMT"; Server = "Apache/2"; } Edit: as requested, here are the request headers (oops): { "Accept-Encoding" = gzip; "Content-Length" = 5781; "Content-Type" = "multipart/form-data; charset=utf-8; boundary=0xKhTmLbOuNdArY"; "User-Agent" = "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)"; }

    Read the article

  • SQL 2008 R2: Data\Log partitions

    - by Reese Hirth
    I have a SQL Server setup that a previous IT person set up with a 2TB data partition and a 1TB log partition. The OS partition is 244GB and SQL is installed on a separate 1TB partition. We have an additional 8TB of storage that I would like the new IT staff to bring online. He wants to create 4 new 2TB data partitions. I see this as confusing. Can't we just back up the current data partition, blow it away, and create a new 10TB data partition? I'm responsible for administering the data on the server but am not allowed to do the setup myself. This is a GIS server running ArcGIS Server with around 60 geodatabases ranging from 20GB to a couple that may grow to over a TB. So: five 2TB data partitions, or one 10TB partition? Thanks for the advice.

    Read the article

  • /data/tmp on database server?

    - by Mellon
    I am on a Linux Ubuntu machine with MySQL installed. My teacher gave out an assignment which mentioned "copy cars.dat to /data/tmp on the MySQL database server" without any explanation, and I do not know what "/data/tmp on the database server" means exactly. Basically, after that I need to execute an SQL statement like LOAD DATA INFILE '/data/tmp/cars.dat' INTO TABLE cars So, what does "copy cars.dat to /data/tmp on the database server" mean, as there is not even a /data/tmp directory? I checked the /etc/mysql/my.cnf file, inside which there are definitions of: ... basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp ... Does it mean to copy cars.dat to the tmpdir, which is just /tmp under the root directory?

    Read the article

  • Datacenter Backup Strategy

    - by EasyEcho
    What are common approaches to backup solutions in remote data centers? I am already familiar with general backup principles and have a very good backup strategy for our local data center, but am having great difficulty extending it to a remote data center. We currently do a full backup on Friday, differentials Mon - Thu, and rotate offsite Friday morning... rinse and repeat week after week. BTW, we use disks and have been very happy with this approach. We could buy a large storage server and back up everything to it, but this solution doesn't give you offsite storage. We could encrypt and upload to Amazon or some other online storage, but that would take a large amount of time given the volume of data, and would be rather expensive, paying for the bandwidth leaving the data center and arriving at Amazon. We could drive to the data center every Friday and continue to rotate disks as we do now. But that just seems old-fashioned. What am I missing; are there better options?

    Read the article

  • How to achieve redundancy across data centers?

    - by BrandonBT
    I have a LAMP server with a lot of hardware redundancy built in. I am not worried about the server becoming unavailable. What I am worried about, however, are potential network issues in the data center the server is in. What I would like to have is another server in another data center for redundancy. Load balancing is less of a concern. With that said, I am relatively clueless on two points: How to have two servers in two geographically separate data centers that have exactly the same data, in terms of both files and MySQL databases. How to ensure that all traffic coming into one data center is automatically transferred to the other data center in the case of a network or server failure at the first data center. Any guidance on how to address the above two problems would be greatly appreciated.

    Read the article

  • Recovering damaged external hard disk by installing internally

    - by nfarshchi
    I had a 1TB Western Digital (My Book series) 3.5" USB3 drive. One day, the SATA to USB3 converter board was damaged and has not worked since. I decided to open the cover and use the HDD as an internal HDD. When I attached the HDD to my PC and booted up in Windows, it asked me which type of ????? I wanted to use, "MBR or GPT" (I don't remember the exact question). I chose MBR and Windows gave me a 1TB empty hard drive. I tried to recover with Recover My Files and some other recovery programs, but with no success. Someone told me that I should have chosen GPT instead of MBR. How can I do that now? Another guy told me that the SATA to USB3 converter board is coded to save data on the HDD, that you cannot use such drives internally without losing data, and that I should find another SATA to USB3 board (exactly the same). That is impossible to find because they are not produced any more. Please help me find a solution to bring back my data. UPDATE I have a 1TB WD "My Book" USB 3. The board that converts SATA to USB3 was damaged, so when the HDD was in the enclosure the computer did not recognize it. I opened the enclosure and removed the HDD to use it internally. After connecting it to my PC, Windows showed me a message saying I had two choices, MBR or GPT. I chose MBR and Windows gave me a 1TB empty new volume. I tried many recovery programs to recover my data, but with no success. I brought it to an expert recovery company and they told me the converter board (SATA to USB3) applies some encryption to the data, and without that board you cannot recover anything. So I bought another empty WD enclosure and put the HDD inside, but even after that there are no files. I tried to recover again in this state, but no success. So I have some unanswered questions: does this converter board add any password or encryption? If yes, how can I solve it? Has using many recovery programs affected my data? Any suggestion or solution for bringing back my data? I have used recovery programs such as: Recover My Files, EaseUS Data Recovery, Easy Recovery, TestDisk, Ontrack Easy Recovery. Note: when I was using TestDisk it asked me to choose which partition table I wanted to use. As it was, I chose NTFS; did this make any change to the data?

    Read the article

  • Export data to Excel from Silverlight/WPF DataGrid

    - by outcoldman
    Data export from a DataGrid to Excel is a very common task, and it can be solved in different ways; the chosen way depends on the kind of app you are designing. If you are developing an app for the enterprise, and it will be installed on several computers, then you can put forward a claim (system requirements) under which your app will work for the client. Or the customer will specify the system requirements on which your app should work. In this case you can use COM for the export (using the infrastructure of Excel or OpenOffice). This approach will give you much more flexibility and the ability to use all the features of the Excel application. I'll talk about this approach below. The other case: your app is for personal use and can be installed on any home computer; here it is not good to ask the user to install MS Office or OpenOffice just to use your app. In that case you can use third-party tools for the export, or export to an XML/HTML format which MS Office can read (the approach used by JIRA). But in this case it will be more difficult to satisfy user tasks, like creating a document with landscape orientation and with defined fields for printing. In this article I'll show you how to work with the Excel object from .NET 4 and Silverlight 4 using dynamic objects, and give you an approach which allows you to export data from the Silverlight and WPF DataGrid controls. Read more...
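
    To give a taste of the COM approach described above, here is a rough sketch of my own (not code from the article): it assumes Excel is installed on the client machine and that the DataGrid's contents have already been pulled out into a string matrix, and it uses .NET 4's dynamic keyword to keep the late-bound Excel calls readable.

        using System;

        public static class ExcelExportSketch
        {
            public static void Export(string[][] rows)
            {
                // Late-bound COM automation; requires Excel on the machine.
                Type excelType = Type.GetTypeFromProgID("Excel.Application");
                dynamic excel = Activator.CreateInstance(excelType);
                excel.Visible = true;

                dynamic workbook = excel.Workbooks.Add();
                dynamic worksheet = workbook.ActiveSheet;

                // Excel ranges are 1-based.
                for (int r = 0; r < rows.Length; r++)
                    for (int c = 0; c < rows[r].Length; c++)
                        worksheet.Cells[r + 1, c + 1] = rows[r][c];
            }
        }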

    Read the article

  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well.  In my previous discussion, I focused on data parallelism with no shared state, and where every element is being processed exactly the same. Unfortunately, there are many common cases where this does not happen.  If we are dealing with a loop that requires early termination, extra care is required when parallelizing. Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing.  This may be a matter of finding a specific element within the collection, or reaching some error case.  The important distinction here is that it is often impossible to know, until runtime, what set of elements needs to be processed. In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection.  This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing. As an example, let's go back to our previous Parallel.ForEach example with contacting a customer.  However, this time, we'll change the requirements slightly.  In this case, we'll add an extra condition – if the store is unable to email the customer, we will exit gracefully.  The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs, it will handle the same situation, so we can just skip our processing entirely.  The original, serial case, with this extra condition, might look something like the following: foreach(var customer in customers) { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { // Exit gracefully if we fail to email, since this // entire process can be repeated later without issue. if (theStore.EmailCustomer(customer) == false) break; customer.LastEmailContact = DateTime.Now; } } Here, we're processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process, and assume that it will get handled correctly the next time our routine is run.  If we try to parallelize this using Parallel.ForEach, as we did previously, we'll run into an error almost immediately: the break statement we're using is only valid when enclosed within an iteration statement, such as foreach.  When we switch to Parallel.ForEach, we're no longer within an iteration statement – we're a delegate running in a method. This needs to be handled slightly differently when parallelized.
Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState.  The ParallelLoopState class is intended to allow concurrently running loop bodies a way to interact with each other, and provides us with a way to break out of a loop.  In order to use this, we will use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>.  Using this, we can parallelize the above operation by doing: Parallel.ForEach(customers, (customer, parallelLoopState) => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { // Exit gracefully if we fail to email, since this // entire process can be repeated later without issue. if (theStore.EmailCustomer(customer) == false) parallelLoopState.Break(); else customer.LastEmailContact = DateTime.Now; } }); There are a couple of important points here.  First, we didn’t actually instantiate the ParallelLoopState instance.  It was provided directly to us via the Parallel class.  All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use.  We also needed to change our logic slightly when we call Break().  Since Break() doesn’t stop the program flow within our block, we needed to add an else case to only set the property in customer when we succeeded.  This same technique can be used to break out of a Parallel.For loop. That being said, there is a huge difference between using ParallelLoopState to cause early termination and to use break in a standard iteration statement.  When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement.  Calling ParallelLoopState.Break(), however, has a very different behavior. The issue is that, now, we’re no longer processing one element at a time.  If we break in one of our threads, there are other threads that will likely still be executing.  This leads to an important observation about termination of parallel code: Early termination in parallel routines is not immediate.  Code will continue to run after you request a termination. This may seem problematic at first, but it is something you just need to keep in mind while designing your routine.  ParallelLoopState.Break() should be thought of as a request.  We are telling the runtime that no elements that were in the collection past the element we’re currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible.  Although this may seem problematic at first, it is a good thing.  If the runtime tried to immediately stop processing, many of our elements would be partially processed.  It would be like putting a return statement in a random location throughout our loop body – which could have horrific consequences to our code’s maintainability. In order to understand and effectively write parallel routines, we, as developers, need a subtle, but profound shift in our thinking.  We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we’d first expect.  
    This is more natural to developers who have dealt with asynchronous models previously, but is an important distinction when moving to concurrent programming models. As an example, I'll discuss the Break() method.  ParallelLoopState.Break() functions in a way that may be unexpected at first.  When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called.  This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code: var collection = Enumerable.Range(0, 20); var pResult = Parallel.ForEach(collection, (element, state) => { if (element > 10) { Console.WriteLine("Breaking on {0}", element); state.Break(); } Console.WriteLine(element); }); If we run this, we get a result that may seem unexpected at first: 0 2 1 5 6 3 4 10 Breaking on 11 11 Breaking on 12 12 9 Breaking on 13 13 7 8 Breaking on 15 15 What is occurring here is that we loop until we find the first element where the element is greater than 10.  In this case, this was found, the first time, when one of our threads reached element 11.  It requested that the loop stop by calling Break() at this point.  However, the loop continued processing until all of the elements less than 11 were completed, then terminated.  This means that it will guarantee that elements 9, 7, and 8 are completed before it stops processing.  You can see our other threads that were running each tried to break as well, but since Break() was called on the element with a value of 11, it decides which elements (0-10) must be processed. If this behavior is not desirable, there is another option.  Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop().  The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed.  Stop() will not stop the processing within an element, so elements already being processed will continue to be processed.  It will prevent new elements, even ones found earlier in the collection, from being processed.  Also, when Stop() is called, the ParallelLoopState's IsStopped property will return true.  This lets longer running processes poll for this value, and return after performing any necessary cleanup. The basic rule of thumb for choosing between Break() and Stop() is the following. Use ParallelLoopState.Stop() when possible, since it terminates more quickly.  This is particularly useful in situations where you are searching for an element or a condition in the collection.  Once you've found it, you do not need to do any other processing, so Stop() is more appropriate. Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement. Both methods behave differently than our C# break statement.  Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you may otherwise expect.  This is due to my second observation: Parallelizing a routine will almost always change its behavior. This sounds crazy at first, but it's a concept that's so simple it's easy to forget.  We're purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.
It is easy to change the behavior of your routine in very subtle ways by introducing parallelism.  Often, the changes are not avoidable, even if they don’t have any adverse side effects.  This leads to my final observation for this post: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
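
    To make the Break()/Stop() guidance above concrete, here is a minimal sketch of the "search" case (my own example, not code from the article; the match condition is hypothetical): the first loop body to find a match records it and calls Stop(), and other bodies bail out early by polling IsStopped.

        using System.Linq;
        using System.Threading;
        using System.Threading.Tasks;

        static class StopExample
        {
            static int FindMatch()
            {
                var items = Enumerable.Range(0, 1000000);
                int found = -1;

                Parallel.ForEach(items, (item, state) =>
                {
                    if (state.IsStopped)
                        return;                       // another thread already found a match

                    if (item == 123456)               // hypothetical match condition
                    {
                        Interlocked.Exchange(ref found, item);
                        state.Stop();                 // request termination as soon as possible
                    }
                });

                return found;
            }
        }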

    Read the article

  • Writing algorithms on a 2D data set in plain English

    - by Alexandre P. Levasseur
    I have started an introductory Java class; the material is absolutely horrendous, and I have to get excellent grades to be accepted into the master's degree, hence my very beginner question: In my assignment I have to write algorithms (no pseudo-code yet) to solve a board game (Sudoku). Essentially, the notes say that an algorithm is a specification of the input(s), the output(s) and the treatments applied to the input to get the output. My question is about the wording of algorithms, because I could probably code it but I can't seem to put it on paper in a coherent way. The game has a 9x9 board, and one of the algorithms to write has to find the solution by looking at 3 squares (either horizontal or vertical) and seeing if one of the three sub-squares matches the number you are looking for. If none match, then the number you are looking to place is in one of the other 2 sets of 3 sub-squares (see image to get a better idea). I really can't get my head around how to formulate the solution in the terms described above, or maybe it's just too simple; here's what I was thinking: Input: A 2-dimensional set of data of size 9 by 9 to be solved and a number to search for. Output: A 2-dimensional set of data of size 9 by 9, either solved or partially solved. Treatment: Scan each set of 3x9 and 9x3 squares. For each line or column of a 3x3 square, check if the number matches a line (or column). If it does, then move to the next line (or column). If not, then proceed to the next 3x3 square in the same line (or column). Rinse and repeat. Does that make sense as an algorithm written in plain English? I'm not looking for an answer to the algorithm per se, but rather for guidance on the formulation of algorithms in plain English.

    Read the article

  • The KSH Data Warehouse: Built on Oracle Essbase and Oracle Database

    - by Fekete Zoltán
    The metadata-driven data warehouse of the Hungarian Central Statistical Office (KSH) rests on three important Oracle products. The data is available on the internet from the KSH Dissemination Database. Data from KSH in English. As I write these lines, at 21:36 on Friday night, 81 online users are querying the data. :) - Oracle Essbase multidimensional OLAP server, technical info - Hyperion Interactive Reporting query tool, technical info - Oracle Database Enterprise Edition The English-language customer snapshot, i.e. the customer story: Hungarian Central Statistical Office Provides 200,000 External Users with Secure Online Access to Data. The Hungarian-language success story: 60 percent of KSH's statistical data is accessible, browser- and platform-independently, to roughly 200,000 internet users per year. DSS Consulting Kft. and Oracle Consulting played a major role in selecting the product and in shaping and implementing the project. The most important results achieved during the project: - data warehouse: 150-200 concurrent users, which means about 200,000 users per year - Essbase's memory-based storage structure: near real-time access - The system is platform- and browser-independent, so a wide range of users can access the statistical data. - A custom maintenance application built on the native Java API and XMLA support - Statisticians build and maintain the multidimensional databases without any special IT background - Oracle Hyperion Interactive Reporting: columnar, cross-tab, sectioned, charted, web-based queries The following KSH presentation from the 2009 HOUG conference can be downloaded: Hyperalea iacta est - the KSH Essbase-based data warehouse system. The newly published success story: in English and in Hungarian.

    Read the article

  • Manage a flexible and elastic Data Center with Oracle VM Manager (By Tarry Singh - PACKT Publishing)

    - by frederic.michiara
    For those looking for an easy read and a good first approach to Oracle VM Manager and VM Servers, I would recommend the following book, even though it was written for 2.1.2 whereas we can now use Oracle VM 2.2: Oracle VM Manager 2.1.2 Manage a Flexible and Elastic Data Center with Oracle VM Manager Learn quickly to install Oracle VM Manager and Oracle VM Servers Learn to manage your Virtual Data Center using Oracle VM Manager Import VMs from the Web, templates, repositories, and other VM formats such as VMware Learn powerful Xen Hypervisor utilities such as xm, xentop, and virsh A practical hands-on book with step-by-step instructions Oracle VM experts might be frustrated, but this book is not aimed at Oracle VM experts; it is for those who need an introduction to the subject with good coverage of everything you need to know. This book is available at https://www.packtpub.com/oracle-vm-manager-2-1-2/book To find out about the table of contents: https://www.packtpub.com/article/oracle-vm-manager-2-1-2-table-of-contents Discover a sample chapter: https://www.packtpub.com/sites/default/files/sample_chapters/7122-oracle-virtualization-sample-chapter-4-oracle-vm-management.pdf Also read articles from Tarry Singh on http://www.packtpub.com/ : Oracle VM Management : http://www.packtpub.com/article/oracle-vm-management-1 Extending Oracle VM Management : http://www.packtpub.com/article/oracle-vm-management-2 I hope you'll enjoy this book as a first approach to Oracle VM. For more information on Oracle VM: Oracle VM on OTN : http://www.oracle.com/technology/products/vm/index.html Oracle VM Wiki : http://wiki.oracle.com/page/Oracle+VM Oracle VM on IBM System x : http://www-03.ibm.com/systems/x/solutions/infrastructure/erpcrm/oracle/virtualization.html

    Read the article

  • Increase Performance and Agility with Oracle’s New Data Center Fabric Solutions

    - by Cinzia Mascanzoni
    Join this Webcast on Tues., December 11, 2012, 10 a.m. PT / 1 p.m. ET, and hear from S.K. Vinod, Senior Director of Product Management, Oracle Virtual Networking products. He’ll show you how the fast, simple, and agile architecture of Oracle Fabric Interconnect provides dynamic network and storage connectivity to thousands of servers. You will see how to use Oracle Software Defined Network (SDN) to connect any resource on the data center fabric quickly—without incurring downtime or requiring network reconfiguration. With Oracle Virtual Networking products, you can: Streamline your data center connectivity Reduce complexity by 70% Cut infrastructure expenses by up to 50% Increase application performance up to 30x Provision new services and reconfigure resources in minutes Simplify deployments with wire-once infrastructure During the Webcast, you’ll also have the opportunity to chat directly with Oracle experts. Visit OPN's Server & Storage Systems Knowledge Zones anytime to learn about partner engagement, training, resources, and replays of other webcasts to jump-start business. You can also email us your questions. Unable to attend live? Register anyway – we'll send you the on-demand link to the Webcast!

    Read the article

  • Cross Platform Data Access with Xamarin & C# For iPhone, iPad, and Android - Local, Web Services, & Sql Server

    - by Wallym
    The following is a link to cross platform data access training with Xamarin & C#.   It is intended for use on iPhone, iPad, and Android devices.  The course covers local data in Sqlite, calling Web Services via REST and JSON, and calling Sql Server. Url: http://www.learnnowonline.com/course/cpx2/xamarin-cross-platform-data-access/  Course Data  Applications live on data. These applications can vary from an online social network service, to a company’s internal database, to simple data, and all points in between. This Course will focus on how to easily access data on the device, communicate back and forth with a web service, and then finally to a SQL server database. Outline Local Data (27:36) Introduction (00:36) Problem (01:57) Solution (02:01) LINQ (02:03) LINQ Status (00:48) SQLite (02:18) SQLite - .Net Developers (00:50) SQLite-net (01:07) SQLite-net Attributes (02:10) Getting Started (01:09) CRUD (01:05) SQLite Platforms (01:17) Demo: SQLite – Android (04:53) Demo: SQLite – iOS (04:56) Summary (00:20) Web Services Data (32:43) Introduction (00:19) Async Commands (03:15) HttpClient (01:26) HTTP Verbs (01:29) Notes (00:58) GET Operation (01:37) JSON.NET (01:50) Images (01:16) Other Http Verbs (01:27) Post (03:18) Demo: Http – iOS prt1 (05:26) Demo: Http – iOS prt2 (05:28) Demo: Http – Android (04:20) Summary (00:27) Direct Data (12:33) Introduction (00:23) Remote Data - Direct (02:47) Sql Server (01:15) Demo: Sql Server – iOS (04:15) Demo: Sql Server – Android (01:49) "codepage 1252 not supported" (01:03) Other Resources (00:43) Summary (00:15) Note: Thanks to Frank Kreuger for his data access library Sqlite-Net.  It is very helpful and I have used it in some other projects beyond just this training session.
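
    For a flavour of the local-data piece, here is a small sketch of my own using the SQLite-net library credited above (the Car entity and LocalStore wrapper are made up for illustration, not code from the course):

        using System.Collections.Generic;
        using System.Linq;
        using SQLite;   // SQLite-net

        // Hypothetical entity; SQLite-net maps it to a table via attributes.
        public class Car
        {
            [PrimaryKey, AutoIncrement]
            public int Id { get; set; }
            public string Model { get; set; }
        }

        public class LocalStore
        {
            private readonly SQLiteConnection _db;

            public LocalStore(string databasePath)
            {
                _db = new SQLiteConnection(databasePath);
                _db.CreateTable<Car>();            // no-op if the table already exists
            }

            public int Add(Car car)     { return _db.Insert(car); }
            public List<Car> GetAll()   { return _db.Table<Car>().ToList(); }
            public int Remove(Car car)  { return _db.Delete(car); }
        }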

    Read the article

  • An XML file or Database?

    - by webnoob
    I am re-writing a section of my site and am trying to decide how much of a rewrite this will be. At the moment I have a web service feed that generates an XML file once per day. I then use this XML file on my website to generate the general structure. I am trying to decide if this information should be located in the database or stay in the XML file. The file can range from 4MB - 12MB. The file's depth can go on and on, so I have to recurse to find the data I want. I use the .NET serializer classes and store the deserialized object in a global variable to avoid re-deserializing it each time the page is loaded. My reasons for thinking a database would be better are: I would know exactly where I am in the file by using an internal ID, so I wouldn't have to recurse the file to get information. I wouldn't have to load/deserialize the XML and could just use my already open database connections. Searching for the data would be quicker(?), as I would just perform an SQL query rather than recursing the file. Has anyone got any ideas about which is better, and which option uses more resources on the server or would be quicker? EDIT: The file is read every time the web page is loaded (although only deserialized once). It isn't written to by standard users (only by an admin task that runs in the middle of the night). This is my initial investigation before mocking up.
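
    For reference, the "load once and cache" part of the current XML approach might look roughly like this (a sketch under my own assumptions; the Feed type and the 24-hour refresh window are placeholders, not the poster's actual code):

        using System;
        using System.IO;
        using System.Xml.Serialization;

        // Hypothetical feed type; the real structure comes from the web-service XML.
        public class Feed { public string Name; }

        public static class FeedCache
        {
            private static readonly object _sync = new object();
            private static Feed _cached;
            private static DateTime _loadedUtc;

            // Deserialize once per day instead of on every page load.
            public static Feed Get(string path)
            {
                lock (_sync)
                {
                    if (_cached == null || (DateTime.UtcNow - _loadedUtc).TotalHours >= 24)
                    {
                        var serializer = new XmlSerializer(typeof(Feed));
                        using (var stream = File.OpenRead(path))
                        {
                            _cached = (Feed)serializer.Deserialize(stream);
                        }
                        _loadedUtc = DateTime.UtcNow;
                    }
                    return _cached;
                }
            }
        }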

    Read the article

  • Is It Possible To Recover A Partial LVM Logical Volume?

    - by Terry Wang
    Background It is an Ubuntu 12.04 VirtualBox VM with 5 virtual HDDs (VDI), NOTE this is just a test VM, so not well planned ahead: ubuntu.vdi for / (/dev/mapper/ubuntu-root AKA /dev/ubuntu/root) and /home (/dev/mapper/ubuntu-home) weblogic.vdi - /dev/sdb (mounted on /bea for weblogic and other stuff) btrfs1.vdi - /dev/sdc (part of btrfs -m raid1 -d raid1 configuration) btrfs2.vdi - /dev/sdd (part of btrfs -m raid1 -d raid1 configuration) more.vdi - /dev/sde (added this virtual HDD because / ran out of inodes and it wasn't easy to figure out what to delete so as to free up inodes, so I just added the new virtual HDD, created PV, added it to existing volume group ubuntu, grew the root logical volume to work around the inode issue -_-) What happened? Last Friday, before finishing up I wanted to free up some disk space on that box, for some reason I thought the more.vdi was useless and tried to detach it from the VM, I then clicked delete (should have clicked keep files damn!) by mistake when detaching. Unfortunately I didn't have backup for it. All too late. What I have tried Tried to undelete (use testdisk and photorec) the vdi files but it takes too long and recovered heaps of .vdi files that I didn't want (huge, filled the disk, damn!). I finally gave up. Fortunately most of data is on separate ext4 partition and btrfs volumes. Out of curiosity, I still tried to mount the logical volumes and see if it is possible to at least recover the /var and /etc I tried to use system rescue cd to boot and activate the volume groups, I got: Couldn't find device with uuid xxxx. Refusing activation of the partial LV root. Use --partial to override. 1 logical volume(s) in volume group "ubuntu" now active. I was able to mount home LV but not root LV. I am wondering if it is possible to access the root LV any more. Under the bonnet, data (on LV root - /) was striped to more.vdi (PV), I know it's almost impossible to to recover. But I am still curious about how system administrator/DevOps guys deal with this sort of situation;-) Thanks in advance.

    Read the article

  • Tools for modelling data and workflows using structured text files

    - by Alexey
    Consider a case where I want to try out an idea for an application, but I want to avoid investing a lot of effort in coding UI/workflows/database schema etc. before I see that it's going to be useful to me (as an example potential user). My idea is to stay lightweight and put all the data in text files. So the components could be the following: Domain objects are represented by text files or their fragments Domain objects are grouped by their type using directories Structure the files using a format that is both human- and machine-friendly, e.g. YAML Use a smart text editor (e.g. vim, emacs, RubyMine) to edit and navigate those files Use color schemes and macros/custom commands of the text editor to effectively manipulate those files Use scripts (or a lightweight web framework like Sinatra) to try some business logic ideas on top of the data model The question is: are there tools or toolkits that support this approach or can be adapted to it? Also, any ideas and links to articles/other knowledge sources are very welcome. And a more specific question: what is the simplest way to build and update an index of the YAML files?

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – previous to SSDT you would need to define a DEFAULT constraint however it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file: The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”, in other words checking that option will cause SSDT to insert an arbitrary default value into your newly created NON NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint: ALTER TABLE [dbo].[T1]    ADD [C1] INT NOT NULL,         CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b]; You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight, in other words smart defaults could be seen to be “papering over the cracks”. If you think that should be improved go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet

    Read the article

  • What is the Sarbanes-Oxley (SOX) Act?

    In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it also set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; in fact, this act outlines which pieces of data are to be stored as well as the storage duration. The SOX act targets the financial side of companies, but its impacts can be seen within the technology arena as well, because it is IT's responsibility to store all of a company's electronic records regardless of file type. This act specifies that all records and electronic messages must be saved for no less than five years, according to SearchCIO. In addition, the consequences for non-compliance are fines, imprisonment, or both. Sarbanes-Oxley Act: Rules that affect the management of electronic records according to SearchCIO. Allowed practices regarding destruction, alteration, or falsification of records. Retention period for records storage. Best practices indicate that corporations securely store all business records using the same guidelines set for public accountants. Types of business records that need to be stored: Business Records, Business Communications Including Electronic Communications References: SOXLaw: The Sarbanes-Oxley Act 2002. Retrieved May 2011 from http://www.soxlaw.com/ SearchCIO: What is Sarbanes-Oxley Act (SOX)? Retrieved May 2011 from http://searchcio.techtarget.com/definition/Sarbanes-Oxley-Act

    Read the article

  • Microsoft MVP Award &ndash; Data Platform Development

    - by Dane Morgridge
    For those who don't already know, yesterday I received my first Microsoft MVP Award in Data Platform Development. With fewer than 5,000 MVPs in the world overall and about 20 in the Data Platform category, saying I am honored would be an understatement. From the first time I spoke at a code camp, I was totally hooked, and I have had a blast travelling around the east coast speaking at code camps and user groups. I'd like to take the time to thank Dani Diaz (@danidiaz) for the nomination and everyone who supported me, especially my wife Lisa for letting me travel and speak as much as I have and for putting up with me through late nights and such. Roska Digital, my employer, also deserves a shout out for supporting me and giving me the necessary time off to get to speaking engagements. With any luck, the next year will be at least as much fun as the last one, if not more. I hope to see you at a code camp or user group meeting soon! I would also like to send congratulations to the other new Philly-area MVPs: John Angelini (@johnangelini) & Ned Ames (@nedames) You can find out more about the Microsoft MVP Award at https://mvp.support.microsoft.com/

    Read the article

  • Data transfer between "main" site and secured virtual subsite

    - by Emma Burrows
    I am currently working on a C# ASP.Net 3.5 website I wrote some years ago which consists of a "main" public site, and a sub-site which is our customer management application, using forms-based authentication. The sub-site is set up as a virtual folder in IIS and though it's a subfolder of "main", it functions as a separate web app which handles CRUD access to our customer database and is only accessible by our staff. The main site currently includes a form for new leads to fill in, which generates an email to our sales staff so they can contact them and convince them to become customers. If that process is successful, the staff manually enter the information from the email into the database. Not surprisingly, I now have a new requirement to feed the data from the new lead form directly into the database so staff can just check a box for instance to turn the lead into a customer. My question therefore is how to go about doing this? Possible options I've thought of: Move the new lead form into the customer database subsite (with authentication turned off). Add database handling code to the main site. (No, not seriously considering this duplication of effort! :) Design some mechanism (via REST?) so a webpage outside the customer database subsite can feed data into the customer database I'd welcome some suggestions on how to organise the code for this situation, preferably with extensibility in mind, and particularly if there are any options I haven't thought of. Thanks in advance.
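
    One lightweight shape for the third option is sketched below (my own sketch, not the poster's code; the handler name, form fields, and the placement of the data-layer call are all assumptions): an ASP.NET IHttpHandler inside the sub-site accepts the lead POST coming from the main site's form.

        using System.Web;

        // Hypothetical endpoint, e.g. mapped in the sub-site's web.config as /NewLead.ashx;
        // authentication for this one URL would be relaxed or replaced by a shared secret.
        public class NewLeadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                if (context.Request.HttpMethod != "POST")
                {
                    context.Response.StatusCode = 405;   // only POSTs are accepted
                    return;
                }

                string name = context.Request.Form["name"];
                string email = context.Request.Form["email"];

                // Insert the lead here via the sub-site's existing customer data layer,
                // flagged as a "lead" so staff can later check a box to convert it.

                context.Response.StatusCode = 201;       // created
            }
        }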

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >