Search Results

Search found 58624 results on 2345 pages for 'hierarchical data'.

Page 196 of 2345

  • Improve file transfer speed between Windows PCs and servers

    - by Geotarget
    I've set up a server that is connected to multiple PCs in my workplace. Sadly, data transfer speeds max out at 3 MB/s per connection, which is slow for file transfers, especially large ones. I'm using Windows file sharing; the server runs Windows Server 2008 (2 GHz CPU, 1 GB RAM) and the client PCs mostly run Windows 7. How can I detect bottlenecks in my network and improve file-sharing speed within the network?
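
    A useful first step is to separate a raw-network bottleneck from an SMB/server one by timing a plain TCP transfer between two of the machines. A minimal sketch in Python (the port number and transfer size are arbitrary placeholders):

        import socket, time

        PORT = 5001                      # arbitrary free port
        CHUNK = 64 * 1024
        TOTAL = 100 * 1024 * 1024        # push 100 MB

        def serve():
            # Run on the server: accept one connection and time the transfer.
            with socket.create_server(("", PORT)) as srv:
                conn, _ = srv.accept()
                received, start = 0, time.time()
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
                secs = time.time() - start
                print(f"{received / secs / 1e6:.1f} MB/s over {secs:.1f}s")

        def send(host):
            # Run on a client: stream TOTAL bytes to the server.
            with socket.create_connection((host, PORT)) as conn:
                buf = b"\0" * CHUNK
                for _ in range(TOTAL // CHUNK):
                    conn.sendall(buf)

    If the raw TCP figure is far above 3 MB/s, look at SMB tuning and the server's 1 GB of RAM; if it is equally slow, suspect the NIC, cabling, or switch (e.g. a port that negotiated down to a lower speed).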

    Read the article

  • Data replication between two web nodes

    - by HTF
    I have a WordPress installation running on two web servers (Nginx). There is unidirectional synchronization from server A to server B, and I'm using lsyncd for this purpose. With this configuration I have to add blog posts from the first web server so that the data is replicated to the second one. How can I force access to the WordPress back-end only from the first web server? Please note that both servers use the same domain for WordPress. Regards

    Read the article

  • Storing data, cost/gigabyte

    - by Micaela
    Can anyone give me a general estimate of what web hosts charge for data storage ($/gigabyte)? I'm referring to shared web-hosting services. I have been trying to compare the storage prices offered by business-process-automation SaaS vendors, and now I'm looking at it more generally.

    Read the article

  • mysqldump is not dumping my data

    - by oompahloompah
    I am running mysqldump on Ubuntu Linux (10.04 LTS); my MySQL version info is: mysql Ver 14.14 Distrib 5.1.41, for debian-linux-gnu (i486) using readline 6.1. I used the following command:

        mysqldump -u username -p dbname > dbname_backup.sql

    However, when I opened the generated .sql file, I saw that most of the tables had only their schema dumped, and in the few cases where actual data was dumped, only one or two records were present (there are at least several tens of records in each table). Does anyone know what may be going on?

    Read the article

  • Why does a hard disk suddenly look to Windows as if it "needs to be formatted"?

    - by pufferfish
    This is more of a theory question: what are the reason(s) for a disk suddenly causing Windows to say it "needs to be formatted"? It happens to an IDE disk that I have in a cheap external enclosure, and I can usually get most of the data back by using software like Recuva. It has now happened to an internal disk as well. I'm not looking for software to fix this (although links would be appreciated), but rather a low-level explanation of what gets corrupted on the disk.

    Read the article

  • gunzip: invalid compressed data--format violated

    - by Arunjith
    Problem definition: I transferred a tar.gz file from a Linux machine to a Windows partition. The Windows partition is mounted on the Linux server via CIFS. OS: Red Hat Enterprise Linux Server release 5. Symptom: after the copy completes successfully, an integrity check with gunzip -t fails:

        gunzip -t Backup-28--Jun--2011--Tuesday.tar.gz
        gunzip: Backup-28--Jun--2011--Tuesday.tar.gz: invalid compressed data--format violated

    Untarring the file (tar -xvzf) fails as well.

    Read the article

  • Data Sources (ODBC) hangs when trying to create a new database connection

    - by FredrikD
    When I try to create a new database connection, the Data Sources (ODBC) program hangs, or takes a very long time to find the list of available SQL Servers. This only happens when there are other computers on the network; when my machine (a standard Windows 7 laptop) is alone, it works just fine. My question is: what should I look for in the SQL Server or ODBC configuration to get rid of this erratic behaviour?

    Read the article

  • Can't recover hard drive

    - by BreezyChick89
    My drive got corrupted after a thunderstorm. It used to be one 2.5 TB partition, but now it shows two partitions. It's odd, because the roughly 300 GB of free space now showing is about how much the drive had before it corrupted, but that space was part of the first partition. I tried:

        $ sudo resize2fs -f /dev/sdb1
        Resizing the filesystem on /dev/sdb1 to 536870911 (4k) blocks.
        resize2fs: Can't read an block bitmap while trying to resize /dev/sdb1
        Please run 'e2fsck -fy /dev/sdb1' to fix the filesystem
        after the aborted resize operation.

        $ sudo e2fsck -f /dev/sdb1
        e2fsck 1.42 (29-Nov-2011)
        The filesystem size (according to the superblock) is 610471680 blocks
        The physical size of the device is 536870911 blocks
        Either the superblock or the partition table is likely to be corrupt!
        Abort? n
        ...
        Error reading block 537395215 (Invalid argument) while reading inode
        and block bitmaps.  Ignore error<y>? yes
        Force rewrite<y>? yes
        Error writing block 537395215 (Invalid argument) while reading inode
        and block bitmaps.  Ignore error<y>? yes
        ...

    I get a lot of these. I can't use e2fsck -y because answering "y" to the first question aborts, and if I put a weight on the 'y' key it fails because none of the errors are really fixed. I asked this question before and tried gparted, but gparted fails because the first thing it does is run e2fsck -f -y -v /dev/sdb1, which gives the same error. The disk status says healthy and there are no bad blocks. This is very frustrating, because I can see the data in TestDisk and it looks like it's all there. I already bought another 2.5 TB drive and made a clone using dd. The next step, if I can't fix this, is to wipe that drive and just move the data with TestDisk, but certain folders seem to copy endlessly until the drive is full (symlinks or errors), so that is also a difficult option.

        $ sudo fdisk -l
        Disk /dev/sdb: 2500.5 GB, 2500495958016 bytes
        255 heads, 63 sectors/track, 304001 cylinders, total 4883781168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0005da5e
        Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *     2048  4294969342  2147483647+  83  Linux

        $ sudo badblocks -b 4096 -n -o badfile /dev/sdb 610471680 536870911

    badfile is empty. I also tried changing the superblock with "fsck -b", but all of the backup superblocks give the same result.

    Read the article

  • How to export/transfer DHCP data?

    - by sreevatsa
    We have a very old HP ML110 server that is giving hardware (power) trouble, and we host DHCP services on it, on Windows 2000. I would like to transfer all the DHCP data (it has reserved IPs) from this old server to a new server running Windows 2003. How do I do this?

    Read the article

  • Preventing users from deleting SQL data

    - by me2011
    We just purchased a program that requires its users to have an account on the MS SQL server, with read/write access to the program's database. My concern is that since these users now have write access to the database, they could connect to the SQL server directly, outside of the program's client, and mess with the data in the tables. Is there any way I can prevent that kind of access while still allowing access via the client program?
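
    One standard SQL Server mechanism for exactly this is an application role: the users' own logins get no rights on the tables, and the client application unlocks a privileged role after connecting. Whether a purchased client can be configured to do this depends on the vendor, so treat the following as a sketch; the role name, password, and connection details are placeholders:

        import pyodbc

        # Administrator side (run once in the database) - users themselves
        # receive no table permissions:
        #   CREATE APPLICATION ROLE app_rw WITH PASSWORD = 'S3cret!';
        #   GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_rw;

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;"
            "DATABASE=AppDb;UID=plainuser;PWD=userpw;"  # placeholder credentials
        )
        cur = conn.cursor()
        # The client activates the role; a user connecting with Excel or SSMS
        # without the role password keeps only their (empty) own permissions.
        cur.execute("EXEC sp_setapprole 'app_rw', 'S3cret!';")

    If the vendor's client can't activate a role, the usual fallback is to revoke direct table access and expose stored procedures, which again depends on what the program expects.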

    Read the article

  • Mac: Resize Windows partition without destroying data?

    - by jbehren
    Is there a method/utility to resize the partitions on a dual-boot MacBook Air in place, without destroying their contents? I made the Windows partition too small initially, and all the places I've looked state that resizing with Boot Camp now will destroy all data on the Windows 7 partition. I would prefer free, but I'm open to a reasonably priced utility that can grow the Windows 7 partition into the available space (I can use Boot Camp to shrink the OS X partition without any problems).

    Read the article

  • Moving Data from One Column into Six Columns

    - by Alex Rudd
    I have an Excel sheet where six columns of data are currently combined into one column. I need to separate them, but the first column holds words that are sometimes one word and sometimes more than one. Here is an example:

        Twin 70 442 186 310 221
        Twin Futon 70 389 160 272 195
        XL twin 70 463 196 324 231
        XL Twin Futon 70 418 174 293 209
        Double 100 590 245 413 295

    How can I separate these data sets while keeping the words together in the same column?
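
    Because the last five fields are always numeric, one approach is to split each row from the right. A minimal sketch in Python (the sample rows are from the question; everything else is illustrative):

        import re

        ROW = re.compile(r"^(.+?)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)$")

        def split_row(row):
            # 'XL Twin Futon 70 418 174 293 209' -> ['XL Twin Futon', 70, ...]
            m = ROW.match(row.strip())
            if not m:
                raise ValueError(f"unexpected row: {row!r}")
            name, *nums = m.groups()
            return [name] + [int(n) for n in nums]

        for row in ["Twin 70 442 186 310 221", "XL Twin Futon 70 418 174 293 209"]:
            print(split_row(row))

    The same idea works inside Excel itself: split on spaces with Text to Columns, then re-join everything to the left of the last five columns, or peel the five numbers off the right with formulas.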

    Read the article

  • Storing hierarchical (parent/child) data in Python/Django: MPTT alternative?

    - by Parand
    I'm looking for a good way to store and use hierarchical (parent/child) data in Django. I've been using django-mptt, but it seems entirely incompatible with my brain: I end up with non-obvious bugs in non-obvious places, mostly when moving things around in the tree, leaving inconsistent state where a node and its parent disagree on their relationship. My needs are simple. Given a node: find its root, its ancestors, and its descendants. With a tree: easily move nodes (i.e. change parent). My trees will be smallish (at most 10k nodes over 20 levels, generally much, much smaller, say 10 nodes with 1 or 2 levels). I have to think there is an easier way to do trees in Python/Django. Are there other approaches that do a better job of maintaining consistency?
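
    For trees this small, a plain adjacency list (a self-referencing foreign key) with Python-side traversal may be enough, and consistency is easy to maintain because the parent pointer is the only source of truth. A minimal sketch, assuming an ordinary Django model (all names hypothetical):

        from django.db import models

        class Node(models.Model):
            name = models.CharField(max_length=100)
            parent = models.ForeignKey(
                "self", null=True, blank=True,
                related_name="children", on_delete=models.CASCADE,
            )

            def ancestors(self):
                # Walk parent pointers up to the root (nearest ancestor first).
                node, chain = self.parent, []
                while node is not None:
                    chain.append(node)
                    node = node.parent
                return chain

            def root(self):
                chain = self.ancestors()
                return chain[-1] if chain else self

            def descendants(self):
                # Breadth-first: one query per level, fine for ~10k nodes.
                found, frontier = [], [self]
                while frontier:
                    children = list(Node.objects.filter(parent__in=frontier))
                    found.extend(children)
                    frontier = children
                return found

            def move_to(self, new_parent):
                self.parent = new_parent
                self.save(update_fields=["parent"])

    Moving a node is a single column update, so a node and its parent cannot disagree; the trade-off is one query per tree level when collecting descendants, which django-mptt avoids at the cost of the bookkeeping that was producing the inconsistent state.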

    Read the article

  • The subscription model behind CSS selectors?

    - by Martin Kristiansen
    With CSS selectors, a query string like body > h1.span subscribes to a specific type of node in the tree. Does anyone know how this is done? When selectors drive transformations, how does the browser select the result set, and is there a trick to making it efficient? I imagine some sort of hierarchical type-tree over the entire structure, to which the nodes subscribe, and which is used when running the selector queries, but this is only a guess. Does anyone know the real answer? Or, even more interesting, what would be the best way to do dynamic lookups on a tree based on jQuery/CSS search queries?
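
    For what it's worth, mainstream engines do roughly the reverse of a subscription: rules are indexed by their rightmost simple selector (id, class, or tag), and each candidate element is matched right-to-left, walking up its ancestor chain. A toy sketch of that matching step in Python (the node structure and the descendant-only combinator are simplifications):

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            tag: str
            classes: set = field(default_factory=set)
            parent: "Node | None" = None

        def matches_simple(node, simple):
            # Match a 'tag.class' simple selector against a single node.
            tag, _, cls = simple.partition(".")
            return (not tag or node.tag == tag) and (not cls or cls in node.classes)

        def matches(node, selector):
            # Right-to-left match of a descendant selector like 'body h1.title'.
            parts = selector.split()
            if not matches_simple(node, parts[-1]):
                return False
            ancestor, remaining = node.parent, parts[:-1]
            while remaining and ancestor:
                if matches_simple(ancestor, remaining[-1]):
                    remaining.pop()      # this ancestor satisfied one more part
                ancestor = ancestor.parent
            return not remaining

        body = Node("body")
        h1 = Node("h1", {"title"}, parent=body)
        print(matches(h1, "body h1.title"))   # True

    The efficiency trick is mostly in the index: only elements that could match the rightmost part are ever tested, so the ancestor walk runs rarely.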

    Read the article

  • Is hierarchical data in MySQL as fast to retrieve as XML?

    - by ajsie
    I've got a list of all countries - states - cities (and sub-cities, villages, etc.) in an XML file, and retrieving, for example, all of a state's cities is really quick with XML (using an XML parser). I wonder: if I put all this information in MySQL, is retrieving a state's cities as fast as with XML? After all, XML is designed to store hierarchical data, while relational databases like MySQL are not. The list contains something like 500,000 entities. Would either of these be as fast as XML: the adjacency list model, or the nested set model? And which one should I use, given that there could (theoretically) be unlimited levels under a state, and I've heard adjacency lists aren't good for unlimited child levels? Which is fastest for this huge dataset? Thanks!
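
    With an adjacency list the answer mostly comes down to indexing. 500,000 (id, parent_id) rows also fit comfortably in memory, so one option is to load them once and answer subtree queries with a breadth-first walk over a parent-to-children index; a sketch (table and column names are assumptions):

        from collections import defaultdict, deque

        # rows as (id, parent_id) pairs, e.g. fetched once with
        # "SELECT id, parent_id FROM region" (schema assumed)
        def build_index(rows):
            children = defaultdict(list)
            for node_id, parent_id in rows:
                children[parent_id].append(node_id)
            return children

        def descendants(children, root_id):
            # All ids under root_id, any depth: O(size of the subtree).
            found, queue = [], deque(children[root_id])
            while queue:
                node = queue.popleft()
                found.append(node)
                queue.extend(children[node])
            return found

        rows = [(2, 1), (3, 2), (4, 2)]       # 1=US, 2=California, 3/4=cities
        idx = build_index(rows)
        print(descendants(idx, 1))            # [2, 3, 4]

    In SQL itself, nested sets turn the same subtree query into a single indexed range scan (lft BETWEEN x AND y) and handle unlimited depth, at the cost of expensive inserts and moves; for geographic data that rarely changes, that trade-off is usually acceptable.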

    Read the article

  • Are there C# controls that can be used to create a hierarchical list of prioritised items?

    - by Mendokusai
    I need to be able to display and edit a hierarchical list of tasks in a C# app. It can either be a Windows Forms app or ASP.NET. Basically, I want similar behaviour to the way Microsoft Project handles tasks. The control would need to:

    1) Maintain a list of items made up of several fields
    2) Each item can have a number of children (at least 3 levels of nesting)
    3) It needs to be very easy to change the parents/children of an item
    4) It needs to be very easy to edit the fields (as fast as changing cells in Excel)
    5) It needs to be very easy to reorder the items by dragging and dropping or cut and paste
    6) If I can easily connect the control to a database, even better

    Before I go and create something manually, I'm wondering if there is something available already?

    Read the article

  • Print hierarchical data (adjacency list model) as a list (ul/ol/li)

    - by adi
    I have an adjacency list model like the one on http://dev.mysql.com/tech-resources/articles/hierarchical-data.html. I have built a full table containing all the data, ordered by level, using:

        SELECT t1.name AS lev1, t2.name AS lev2, t3.name AS lev3, t4.name AS lev4
        FROM category AS t1
        LEFT JOIN category AS t2 ON t2.parent = t1.category_id
        LEFT JOIN category AS t3 ON t3.parent = t2.category_id
        LEFT JOIN category AS t4 ON t4.parent = t3.category_id
        WHERE t1.name = 'ELECTRONICS'
        ORDER BY ...

    I want to make an unordered list from this table using PHP. Can anyone help me?
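
    It is usually easier to generate the list from raw (category_id, parent, name) rows than from the flattened lev1..lev4 result. The recursive idea, sketched in Python for brevity (the PHP translation is mechanical; the sample rows follow the MySQL article's ELECTRONICS example):

        from collections import defaultdict

        def render_list(rows, root=None):
            # rows: (category_id, parent, name) tuples -> nested <ul> HTML.
            children = defaultdict(list)
            for cid, parent, name in rows:
                children[parent].append((cid, name))

            def emit(pid):
                kids = children.get(pid)
                if not kids:
                    return ""
                items = "".join(f"<li>{name}{emit(cid)}</li>" for cid, name in kids)
                return f"<ul>{items}</ul>"

            return emit(root)

        rows = [(1, None, "ELECTRONICS"), (2, 1, "TELEVISIONS"), (3, 2, "LCD")]
        print(render_list(rows))
        # <ul><li>ELECTRONICS<ul><li>TELEVISIONS<ul><li>LCD</li></ul></li></ul></li></ul>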

    Read the article

  • How can we make a single-dimension array into a multidimensional hierarchy?

    - by Chetan sharma
    I have a single flat array of hierarchical categories, indexed by category_id, like:

        [8846] => Array
        (
            [category_id] => 8846
            [title] => Tsting two
            [description] => Tsting two
            [subtype] => categories
            [type] => object
            [level] => 2
            [parent_category] => 8841
            [tags] => new
            [name] => Tsting two
        )

    Each value carries its parent_category id, and I have around 500 elements in the array. What is the best way to nest it? The process I followed: krsort the categories array so that all the child categories are at the beginning, then:

        function makeHierarchical() {
            foreach ($this->categories as $guid => $category) {
                if ($category['level'] != 1)
                    $this->multilevel_categories[$category['parent_category']][$guid] = $category;
            }
        }

    But this is not working; it only nests the first level. Can someone point out the error?
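
    The error is that each node is copied into a flat parent-to-children map, so grandchildren are never attached beneath their own parents. Inserting references into the original array instead lets every node collect its children in place, regardless of processing order; a possible fix (field names as in the question):

        function makeHierarchical() {
            foreach ($this->categories as $guid => $category) {
                if ($category['level'] != 1) {
                    // Attach by reference so this node keeps any
                    // children that are added to it later.
                    $this->categories[$category['parent_category']]['children'][$guid]
                        = &$this->categories[$guid];
                } else {
                    // Level-1 nodes become the roots of the tree.
                    $this->multilevel_categories[$guid] = &$this->categories[$guid];
                }
            }
        }

    With references, the krsort ordering step is no longer needed.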

    Read the article

  • How to store Hierarchical K-Means tree for a large number of images, using Opencv?

    - by AquaAsh
    I am trying to make a program that will find similar images in a dataset of images. The steps are: 1) extract SURF descriptors for all images; 2) store the descriptors; 3) apply KNN on the stored descriptors; 4) match the stored descriptors to the query image's descriptors using KNN. Now, each image's SURF descriptors will be stored as a hierarchical k-means tree. Do I store each tree as a separate file, or is it possible to build some sort of single tree holding all the images' descriptors that is updated as images are added to the dataset? This is the paper I am basing the program on: www.ijest.info/docs/IJEST10-02-03-13.pdf
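
    A single tree over the pooled descriptors is a common choice (a "vocabulary tree"): serialize it as one file and keep, at each leaf, an inverted index of the image ids whose descriptors landed there, so adding an image only updates the index. A rough sketch using scikit-learn and pickle in place of OpenCV's own FLANN serialization (every name here is illustrative):

        import pickle
        import numpy as np
        from sklearn.cluster import KMeans

        def build_tree(descs, k=8, max_depth=4, min_size=50):
            # Recursively cluster pooled descriptors into a k-way tree.
            if max_depth == 0 or len(descs) < min_size:
                return {"leaf": True}
            km = KMeans(n_clusters=k, n_init=3).fit(descs)
            children = [build_tree(descs[km.labels_ == i], k, max_depth - 1, min_size)
                        for i in range(k)]
            return {"leaf": False, "centers": km.cluster_centers_, "children": children}

        def leaf_path(tree, desc):
            # Descend with one descriptor; the path identifies a leaf bucket.
            path = []
            while not tree["leaf"]:
                i = int(np.argmin(np.linalg.norm(tree["centers"] - desc, axis=1)))
                path.append(i)
                tree = tree["children"][i]
            return tuple(path)

        # All images' SURF descriptors stacked into one (N, 64) array:
        pooled = np.random.rand(5000, 64).astype(np.float32)   # stand-in data
        tree = build_tree(pooled)
        with open("vocab_tree.pkl", "wb") as f:                # one file, whole dataset
            pickle.dump(tree, f)

    Rebuilding the tree itself is only needed occasionally, once the dataset has grown enough that the old cluster centres no longer represent it.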

    Read the article

  • Localizing a plist with grouped data

    - by Robert Altman
    Is there a way to localize a plist that contains hierarchical or grouped data? For instance, if the plist contains:

        Book 1 (dictionary)
            Key (string)
            Name (string)
            Description (localizable string)
        Book 2 (dictionary)
            Key (string)
            Name (string)
            Description (localizable string)
        (etcetera...)

    For the sake of the example, the Key and Name should not be translated (and preferably should not be duplicated in multiple localized property lists). Is there a mechanism for providing localizations for the localizable Description field without localizing the entire property list? The only other strategy that came to my mind is to store a lookup key in the Description field and then use that to retrieve the localized text via NSLocalizedString(...). Thanks.

    Read the article

  • How to structure data... Sequential or Hierarchical?

    - by Ryan
    I'm going through the exercise of building a CMS that will organize a lot of the common documents my employer generates each time we get a new sales order. Each new sales order gets a 5-digit number (12222, 12223, 12224, etc.), but internally we apply a hierarchy to these numbers:

        + 121XX
          |--01
          |--02
        + 122XX
          |--22
          |--23
          |--24

    In my table for sales orders, is it better to use the 5-digit number as the ID and populate upward, or to use the hierarchical structure we use when referring to jobs in regular conversation? The only benefit to not populating sequentially seems to be formatting the data later on in my view, and that doesn't sound like a good enough reason to go through the extra work. Thanks
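
    Since the grouping is fully determined by the digits, one option is to store only the sequential number and derive the hierarchy in the view layer; a tiny sketch (the split rule is inferred from the example above):

        def split_order(order_id):
            # 12223 -> ('122XX', 23): prefix groups orders, suffix numbers them.
            prefix, suffix = divmod(order_id, 100)
            return f"{prefix}XX", suffix

        print(split_order(12223))   # ('122XX', 23)

    That keeps the table simple and sequential while still letting the UI show the conversational 121XX/122XX grouping.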

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this.

    When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do.

    I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs.

    If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination.

    Let’s look at a very simple query as an example, using the AdventureWorks database. We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It’ll be no surprise to see that the data produced is sorted. Obvious, I know, but I’m making a comparison to what I’ll do later.

    Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation.

    Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan?

    The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow.

    Luckily, T-SQL gives us a brilliant query hint to help avoid this:

        OPTION (FAST 10000)

    This hint means that it will choose a plan which is optimised for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant.

    First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row can be returned quicker – a Flow Distinct operator is non-blocking.

    The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want.

    The Query Optimizer seems to do this by optimising the query as if there were only one row coming through. This 1-row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does.

    I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint... and now with OPTION (FAST 10000). A very different plan, I’m sure you’ll agree. In case you’re curious, the arrows in the first plan are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000.

    The first one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops across 780,000 rows! That’s not generally recommended at all. That’s “go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute.

    But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared.

    The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds.

    When debugging SSIS, you run the package and sit there waiting for the Debug information to start appearing. You look for the numbers on the data flow and watch operators go Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred.

    By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.
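
    To make the hint concrete, here is roughly what the article's simple Source query looks like, wrapped in a short Python/pyodbc harness that mimics SSIS filling its first buffer (the connection string is a placeholder; the table and hint come from the article's AdventureWorks example):

        import time
        import pyodbc

        # Placeholder connection string; adjust server and credentials.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
            "DATABASE=AdventureWorks;Trusted_Connection=yes;"
        )

        # The article's example: distinct Weights, optimised for the first
        # rows, like an SSIS Source filling its first 10,000-row buffer.
        sql = """
            SELECT DISTINCT Weight
            FROM Production.Product
            OPTION (FAST 10000);
        """

        cur = conn.cursor()
        start = time.time()
        cur.execute(sql)
        first_buffer = cur.fetchmany(10000)   # time-to-first-buffer, not total runtime
        print(f"first {len(first_buffer)} rows in {time.time() - start:.3f}s")

    Measuring fetchmany() rather than a full fetchall() is the point: it approximates what SSIS actually waits for before the rest of the flow can start.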

    Read the article
