Search Results

Search found 11478 results on 460 pages for 'disk partition'.

  • Is my TFS2010 backup/restore hosed?

    - by bwerks
    Hi all, I recently set up a sandbox TFS to test TFS-specific features without interfering with the production TFS, and I'm glad I did it sooner rather than later: I hadn't been backing up the encryption key from SSRS, and upon restoring the reporting databases they remained inactive, requiring initialization that could only come from applying the encryption key. Said encryption key was lost when I nuked the partition after backing up the TFS databases. The only option I seem to have is to delete the encrypted data. I'm fine with this, since there wasn't much in there to begin with; however, once it's deleted I'm not quite sure how to configure TFS to recognize a new installation of these services while using the restored versions of everything else, and the TFS help file doesn't seem to account for this state. Is there a way to essentially rebuild the reporting and analysis databases, or are they gone forever?

  • DBCC CHECKDB WITH DATA_PURITY gives out of range error

    - by Mark Allison
    Hi there, I restored a SQL Server 2000 database onto SQL Server 2005 and then ran DBCC CHECKDB WITH DATA_PURITY, which produces this error:

        Msg 2570, Level 16, State 3, Line 2
        Page (1:19558), slot 13 in object ID 181575685, index ID 1, partition ID 293374720802816,
        alloc unit ID 11899744092160 (type "In-row data"). Column "NumberOfShares" value is out of
        range for data type "numeric". Update column to a legal value.

    The column NumberOfShares is a numeric(19,6) data type. If I run

        select max(NumberOfShares) from AUDIT_Table
        select min(NumberOfShares) from AUDIT_Table

    I get 22678647.839110 and -1845953000.000000. These values are inside the bounds of a numeric(19,6), so I'm not sure why the DBCC check fails. Any ideas how to find out why it fails? Do I need to use DBCC PAGE? How would you troubleshoot this? Thanks, Mark.

  • How to keep Hibernate mapping use under control as requirements grow

    - by David Plumpton
    I've worked on a number of Java web apps where persistence is via Hibernate, and we start off with some central class (e.g. an insurance application) without any time spent considering how to break things up into manageable chunks. Over time, as features are added, we add more mappings (rates, clients, addresses, etc.), and the amount of time spent saving and loading an insurance object and everything it connects to grows. In particular, close to a go-live date, performance testing with larger amounts of data in each table starts to demonstrate that it's all too slow. Obviously there are a number of ways we could partition things up, e.g. map only the client classes for the client CRUD screens, which would have been better to get in place early rather than trying to work it in at the end of the dev cycle. I'm just wondering if there are recommendations about ways to handle/mitigate this.

  • Storage drives are causing system crashes

    - by Chad
    I'm running CentOS 5.4 with a 750 GB (NTFS) and a 2 TB drive for storage. Originally I installed the 750; everything seemed fine, and then I installed the 2 TB drive, already partitioned with NTFS. I noticed that when I copy a lot of videos, the machine crashes (no mouse or response from the server) about 20 min in. After some troubleshooting I noticed the 750 would also crash doing the same task, so I decided NTFS might be the problem. I unmounted the 2 TB drive and tried to partition and format it as ext2, but when using parted it would crash at the point of "writing inode tables". Looking at the dmesg logs, I believe this is the relevant error: "mtrr: type mismatch for e0000000,10000000 old: write-back new: write-combining". Any idea what could be causing this?

  • Amazon EC2 multiple servers share session state

    - by Theofanis Pantelides
    Hi everyone, I have a bunch of EC2 servers behind a load balancer. Some of the servers are not sharing session state, and users keep getting logged in and out. How can I make all the servers share the one session, possibly even using an IPartitionResolver solution like this?

        public class PartitionResolver : System.Web.IPartitionResolver
        {
            private String[] partitions;

            public void Initialize()
            {
                // create the partition connection string table
                // web1, web2
                partitions = new String[] { "192.168.1.1" };
            }

            public String ResolvePartition(Object key)
            {
                String oHost = System.Web.HttpContext.Current.Request.Url.Host.ToLower().Trim();
                if (oHost.StartsWith("10.0.0") || oHost.Equals("localhost"))
                    return "tcpip=127.0.0.1:42424";

                String sid = (String)key;
                // hash the incoming session ID into one of the available partitions
                Int32 partitionID = Math.Abs(sid.GetHashCode()) % partitions.Length;
                return ("tcpip=" + partitions[partitionID] + ":42424");
            }
        }

    -theo

  • memcached cluster maintenance

    - by Yang
    Scaling memcached up to a cluster of shards/partitions requires either distributed routing/partition-table maintenance or centralized proxying (plus other machinery like failure detection). What are the popular/typical approaches/systems here? There's software like libketama, which provides consistent hashing, but this is just a client-side library that reacts to messages about node arrivals and departures. Do most users just run something like this, plus separate monitoring nodes that, on detecting a failure, notify all the libketamas of the departure? I imagine something like this might be sufficient, since typical use of memcached as a soft-state cache doesn't require careful attention to consistency, but I'm curious what people actually do.
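
    A minimal sketch of the client-side consistent-hashing half of this, in Python. The node addresses, the replica count, and the remove_node() call from a hypothetical failure monitor are all illustrative assumptions, not libketama's actual API:

        import bisect
        import hashlib

        class HashRing:
            """Consistent hash ring: a key only remaps when its own node leaves."""

            def __init__(self, nodes, replicas=100):
                self.replicas = replicas      # virtual points per server
                self.ring = {}                # ring position -> node
                self.sorted_keys = []
                for node in nodes:
                    self.add_node(node)

            def _hash(self, key):
                return int(hashlib.md5(key.encode()).hexdigest(), 16)

            def add_node(self, node):
                for i in range(self.replicas):
                    point = self._hash("%s:%d" % (node, i))
                    self.ring[point] = node
                    bisect.insort(self.sorted_keys, point)

            def remove_node(self, node):      # what a failure monitor would call
                for i in range(self.replicas):
                    point = self._hash("%s:%d" % (node, i))
                    del self.ring[point]
                    self.sorted_keys.remove(point)

            def get_node(self, key):
                if not self.ring:
                    return None
                idx = bisect.bisect(self.sorted_keys, self._hash(key))
                idx %= len(self.sorted_keys)
                return self.ring[self.sorted_keys[idx]]

        ring = HashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
        print(ring.get_node("user:42"))       # route this key's get/set here
        ring.remove_node("10.0.0.2:11211")    # only ~1/3 of keys remap

    Because only the departed node's share of keys remaps, a monitor that broadcasts arrivals and departures to every client is usually enough for a soft-state cache, exactly as the question suggests.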

  • Why does this bash command take up all the space on the device?

    - by chelmertz
    Hey! I'm a little new to searching via bash, so feel free to suggest methods to use instead of this one, which I'll never use again :) I'm searching for occurrences of a string, recursively, in a directory with ~50 not-that-large PHP files in it; some in the current directory, some in directories beneath it, at most three levels down. The method I'm using is: find . | xargs grep "module" > module.txt When run in simple (one-level) directories this works fine, but in this case the file grew to 4 GB until it filled up all the space on the partition :) It wasn't even done yet. Would someone educate me so I won't embarrass myself again?
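
    For what it's worth, the likely cause: the shell creates module.txt before find runs, so find lists it, and grep then reads the very file it is appending to, feeding its own matches back in indefinitely. Writing the output outside the searched tree (e.g. > /tmp/module.txt) avoids that. A Python sketch of the same search that skips its own output file explicitly:

        import os

        PATTERN = "module"
        OUTFILE = os.path.abspath("module.txt")

        with open(OUTFILE, "w") as out:
            for dirpath, _dirs, files in os.walk("."):
                for name in files:
                    path = os.path.join(dirpath, name)
                    if os.path.abspath(path) == OUTFILE:
                        continue              # never grep the results file itself
                    try:
                        with open(path, errors="ignore") as f:
                            for num, line in enumerate(f, 1):
                                if PATTERN in line:
                                    out.write("%s:%d:%s" % (path, num, line))
                    except OSError:
                        pass                  # unreadable entry; skip it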

  • How can I improve the query below?

    - by Newbie
    I have the following input:

        INPUT: TableA

        ID   Sentences
        ---  ----------------------
        1    I am a student
        2    Have a nice time guys!

    What I need to do is extract the words from the sentence(s) and insert each individual word into another table:

        OUTPUT:

        SentenceID  WordOccurance  Word
        ----------  -------------  -------
        1           1              I
        1           2              am
        1           3              a
        1           4              student
        2           1              Have
        2           2              a
        2           3              nice
        2           4              time
        2           5              guys!

    I was able to get the answer by using the query below:

        ;With numCTE As
        (
            Select rn = 1
            Union all
            Select rn + 1 from numCTE where rn < 1000
        )
        select
            SentenceID    = id,
            WordOccurance = row_number() over (partition by TableA.ID order by rn),
            Word          = substring(' ' + sentences + ' ', rn + 1,
                                charindex(' ', ' ' + sentences + ' ', rn + 1) - rn - 1)
        from TableA
        join numCTE on rn <= len(' ' + sentences + ' ')
        where substring(' ' + sentences + ' ', rn, 1) = ' '
        order by id, rn

    How can I improve this query? Basically I am looking for a better solution than the one presented. Thanks

  • In a process using lots of memory, how can I spawn a shell without a memory-hungry fork()?

    - by kdt
    On an embedded platform (with no swap partition), I have an application whose main process occupies most of the available physical memory. The problem is that I want to launch an external shell script from my application, but using fork() requires enough memory for 2x my original process before the child process (which will ultimately execl itself into something much smaller) can be created. So is there any way to invoke a shell script from a C program without incurring the memory overhead of a fork()? I've considered workarounds such as a secondary, smaller process responsible for creating shells, or a "watcher" script that I signal by touching a file or some such, but I'd much rather have something simpler.
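
    One escape hatch worth sketching: posix_spawn() performs the fork and exec as a single primitive, so the parent's address space is never duplicated (implementations typically use vfork or clone under the hood). Shown here through Python's os.posix_spawn wrapper (Python 3.8+) purely to illustrate the call shape; the C library function takes analogous arguments, and the script path is hypothetical:

        import os

        # Spawn /bin/sh on a script without paying fork()'s 2x memory bill;
        # posix_spawn combines the fork and the exec into one step.
        pid = os.posix_spawn(
            "/bin/sh",
            ["/bin/sh", "/path/to/script.sh"],  # argv; path is illustrative
            os.environ,
        )
        _, status = os.waitpid(pid, 0)          # reap the child
        print("script exit code:", os.WEXITSTATUS(status))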

  • Extra space while installing .msi

    - by Cristina
    I have extra space in my custom-made .msi. The data to be installed is about 600 MB, yet the installer says it needs 1.4 GB. Switching to a location other than the predetermined one (e.g. from C:\Program Files\My_App to F:\My_App) shows that it always needs approximately 800 MB on the Windows partition, which is in my case the extra space. Any thoughts on why this is happening? Also, can this cause an installation error on a 64-bit OS? I've only installed it on a 32-bit OS, where everything is fine, but one of my colleagues is having problems installing it on 64-bit.

  • Where are Wireless Profiles stored in Ubuntu [closed]

    - by LonnieBest
    Where does Ubuntu store the profiles that allow it to remember the credentials of private wireless networks it has previously authenticated to and used? I just replaced my uncle's hard drive with a new one and installed Ubuntu 10.04 on it (he had Ubuntu 9.10 on his old hard drive). He is at my house right now, and I want him to be able to access his private wireless network when he gets home. Usually, when I upgrade Ubuntu, I keep his /home directory on another partition, so his wireless profile for his own network persists. Right now, though, I'm trying to figure out which hidden dot-folder I need to copy from /home/user on the old hard drive to the new one so that he will have wireless Internet when he gets home. Does anyone know with certainty exactly which folder I need to copy to the new hard drive to achieve this?

  • grouping objects to achieve a similar mean property for all groups

    - by cytochrome
    I have a collection of objects, each of which has a numerical 'weight'. I would like to create groups of these objects such that each group has approximately the same arithmetic mean of object weights. The groups won't necessarily have the same number of members, but the size of groups will be within one of each other. In terms of numbers, there will be between 50 and 100 objects and the maximum group size will be about 5. Is this a well-known type of problem? It seems a bit like a knapsack or partition problem. Are efficient algorithms known to solve it? As a first step, I created a python script that achieves very crude equivalence of mean weights by sorting the objects by weight, subgrouping these objects, and then distributing a member of each subgroup to one of the final groups. I am comfortable programming in python, so if existing packages or modules exist to achieve part of this functionality, I'd appreciate hearing about them. Thank you for your help and suggestions.
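
    This is indeed close to the balanced multi-way partition problem (NP-hard in general, but tiny at 50-100 items). A greedy baseline that matches the sort-and-distribute approach described above, assuming the goal is roughly equal group means with sizes within one of each other, is a serpentine ("snake draft") deal; the weights below are made up for illustration:

        def snake_partition(weights, num_groups):
            """Deal sorted weights out in order 0..k-1, k-1..0 so each group
            gets a mix of heavy and light items; sizes differ by at most one."""
            groups = [[] for _ in range(num_groups)]
            order = list(range(num_groups)) + list(range(num_groups - 1, -1, -1))
            for i, w in enumerate(sorted(weights, reverse=True)):
                groups[order[i % len(order)]].append(w)
            return groups

        weights = [9.5, 8.1, 7.7, 6.0, 5.5, 5.2, 4.8, 3.3, 2.9, 1.4]
        for g in snake_partition(weights, 3):
            print(g, "mean = %.2f" % (sum(g) / len(g)))

    A common refinement is to follow this with local swaps between the groups with the highest and lowest means until no swap narrows the spread further.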

  • query in query builder in a Table Adapter

    - by Sony
    I am working with .NET datasets. I have an Oracle query which works fine, but when I copy the query as a SQL statement into the Table Adapter wizard and click the Query Builder button, there is a SQL syntax error. The query is below:

        SELECT lead_id, NAME, ADDRESS, CITY, EMAIL, PHONE, PINCODE, STATE,
               QUALIFICATION, DOB, status
        FROM (SELECT l.lead_id, l.NAME, l.ADDRESS, l.CITY, l.EMAIL, l.PHONE,
                     l.PINCODE, l.STATE, l.QUALIFICATION, l.DOB,
                     CASE WHEN s.status IS NULL THEN 'Not Updated !'
                          ELSE s.status END status,
                     row_number() over (PARTITION BY l.lead_id
                                        ORDER BY t.CREATED_DATE DESC) rn
              FROM LEADS l
              JOIN Leads lc
                ON l.USER_ID = lc.USER_ID
               AND l.USER_ID = :iuser_id
               AND (l.CREATED_DATE BETWEEN TO_DATE(:ifrom_date, 'dd-mm-yyyy')
                                       AND TO_DATE(:ito_date, 'dd-mm-yyyy'))
              LEFT JOIN LEADTRANSACTION t ON l.lead_id = t.lead_id
              LEFT JOIN STATUS s ON s.STATUS_ID = t.STATUS_ID)
        WHERE rn = 1;

  • Append all logs to /var/log

    - by iCy
    Application scenario: I have the (normal/permanent) /var/log mounted on an encrypted partition (/dev/LVG/log). /dev/LVG/log is not accessible at boot time; it needs to be manually activated later, by su over ssh. A RAM drive (using tmpfs) is mounted at /var/log at init time (in rc.local). Once /dev/LVG/log is activated, I need a good way of appending everything in the tmpfs to /dev/LVG/log before mounting it as /var/log. Any recommendations on a good way of doing so? Thanks in advance!
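
    A sketch of the handover step, assuming the unlocked /dev/LVG/log gets mounted temporarily at a staging point such as /mnt/permlog (that path is an assumption): append each tmpfs file onto its permanent counterpart, then swap the mounts.

        import os
        import shutil

        TMP_LOG = "/var/log"       # tmpfs holding everything logged since boot
        PERM_LOG = "/mnt/permlog"  # /dev/LVG/log, mounted here after unlocking

        for dirpath, _dirs, files in os.walk(TMP_LOG):
            rel = os.path.relpath(dirpath, TMP_LOG)
            dest_dir = os.path.join(PERM_LOG, rel)
            os.makedirs(dest_dir, exist_ok=True)
            for name in files:
                src = os.path.join(dirpath, name)
                dst = os.path.join(dest_dir, name)
                with open(src, "rb") as s, open(dst, "ab") as d:
                    shutil.copyfileobj(s, d)  # append boot logs onto old ones

        # then: stop syslog, umount the tmpfs, mount /dev/LVG/log on /var/log,
        # and start syslog again so new lines land on the encrypted volume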

  • MS Sync framework - Identity crisis resolution by partitioning the primary key.

    - by user326136
    Hello, we are implementing an offline feature in an existing application. We have implemented the sync with SQL Server internal change tracking over WCF, using the MS Sync Framework (http://msdn.microsoft.com/en-us/sync/default.aspx). All of our tables have integer primary keys, and we cannot move to GUIDs. So, as you are thinking, we will have identity crises between applications. We decided to go the way merge replication does (http://msdn.microsoft.com/en-us/library/aa179416(SQL.80).aspx) and partition the primary key range. Below is an example scenario:

        Server   Table A - ID Range - 0 to 100
        Client 1 Table A - ID Range - 101 to 200
        Client 2 Table A - ID Range - 201 to 300

    How do I implement this? I know we can use

        DBCC CHECKIDENT (yourtable, reseed, value)
        CHECK (([ID] <= (100)))

    but this does not solve the issue. Merge replication provides a "Not for replication" option (http://msdn.microsoft.com/en-us/library/aa237102(SQL.80).aspx) to accept inserts from clients while still maintaining the set range. Can I use that somehow here? Please help...

  • Which parallel sorting algorithm has the best average case performance?

    - by Craig P. Motlin
    Sorting takes O(n log n) in the serial case. If we have O(n) processors we would hope for a linear speedup. O(log n) parallel algorithms exist but they have a very high constant. They also aren't applicable on commodity hardware which doesn't have anywhere near O(n) processors. With p processors, reasonable algorithms should take O(n/p log n/p) time. In the serial case, quick sort has the best runtime complexity on average. A parallel quick sort algorithm is easy to implement (see here and here). However it doesn't perform well since the very first step is to partition the whole collection on a single core. I have found information on many parallel sort algorithms but so far I have not seen anything pointing to a clear winner. I'm looking to sort lists of 1 million to 100 million elements in a JVM language running on 8 to 32 cores.
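
    For this regime (p of 8 to 32, n up to 10^8), the usual recipe is exactly the shape hinted at above: sort p chunks in parallel, then do a k-way merge. A sketch in Python with multiprocessing; the same structure ports directly to a JVM fork/join pool:

        import heapq
        import random
        from multiprocessing import Pool

        def parallel_sort(data, p=8):
            """Sort p chunks in parallel (each O((n/p) log(n/p))), then do a
            serial k-way merge of the sorted runs (O(n log p))."""
            size = (len(data) + p - 1) // p
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            with Pool(p) as pool:
                runs = pool.map(sorted, chunks)
            return list(heapq.merge(*runs))

        if __name__ == "__main__":
            data = [random.random() for _ in range(1_000_000)]
            assert parallel_sort(data) == sorted(data)

    Unlike parallel quicksort, no single core ever touches the whole collection before the parallel phase; the serial merge is the only sequential O(n) part, and it can itself be parallelized if it becomes the bottleneck.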

  • Where to place web server root?

    - by nefo_x
    Hello everybody, I've just done an upgrade and am now thinking about the web-server directory structure for a local Linux workstation used for web development. It needs to run multiple hosts and different projects. Where is it best to put all the server docroots: /var/www? /srv? /www? I plan to make it a separate partition; could that be good for backups? :) I'm looking forward to your thoughts on this.

  • How do I switch the table that is queried with LINQ to SQL?

    - by Ian Ringrose
    We have two tables with the same set of columns; depending on the "type" of object, the value is stored in one of the two tables. I wish to use common code to access both tables. If I were using raw SQL I could just use String.Format() to change the table name (likewise for updates etc.). The two separate tables are needed because the data access patterns are very different for the common queries on them, and therefore different indexes are needed. Views and "instead of" triggers to make the tables look like a single table are not liked here. A lot of our customers use low-end versions of SQL Server, so we cannot use partitioned tables.

  • ExecutorService to read data from a database in chunks and run a process on them

    - by TazMan
    I'm trying to write a process that reads data from a database and uploads it onto a cloud datastore. How can I decide the partitioning strategy for the data? I want to query the table in chunks and process each chunk in 10 threads; each thread will basically send its data to an individual node of a 10-node cluster in the cloud. Where in the multithreading code below would the data query go that extracts the data and sends the 10 concurrent upload requests?

        public class Caller {
            public static void main(String[] args) {
                ExecutorService executor = Executors.newFixedThreadPool(10);
                for (int i = 0; i < 10; i++) {
                    Runnable worker = new DomainCDCProcessor(i);
                    executor.execute(worker);
                }
                executor.shutdown();
                while (!executor.isTerminated()) {
                }
                System.out.println("Finished all threads");
            }
        }
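
    One common answer to the partitioning question: don't run one big query and hand rows to threads; give each worker its own bounded query, e.g. the rows whose primary key satisfies id % 10 == worker_id, so the extract and the upload both live inside the worker. Sketched in Python for brevity; the table name, the modulo split, and the upload stub are illustrative assumptions:

        import sqlite3
        from concurrent.futures import ThreadPoolExecutor

        NUM_WORKERS = 10
        CHUNK = 500                      # rows per fetch round trip

        # demo setup: a small table standing in for the real source database
        conn = sqlite3.connect("source.db")
        conn.execute("CREATE TABLE IF NOT EXISTS records"
                     " (id INTEGER PRIMARY KEY, payload TEXT)")
        conn.executemany("INSERT OR REPLACE INTO records VALUES (?, ?)",
                         [(i, "row %d" % i) for i in range(100)])
        conn.commit()
        conn.close()

        def upload_to_node(node, rows):
            print("node %d: uploading %d rows" % (node, len(rows)))  # stub

        def process_partition(worker_id):
            """Each worker extracts only its slice of the table."""
            db = sqlite3.connect("source.db")
            cur = db.execute("SELECT id, payload FROM records WHERE id % ? = ?",
                             (NUM_WORKERS, worker_id))
            while True:
                rows = cur.fetchmany(CHUNK)
                if not rows:
                    break
                upload_to_node(worker_id, rows)   # this thread's cloud node
            db.close()

        with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
            pool.map(process_partition, range(NUM_WORKERS))
        print("Finished all threads")

    This also replaces the busy-wait in the original: in Java, executor.awaitTermination() does the same job as the empty while loop without spinning a core.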

  • Run a proc on several different values of one parameter

    - by WEFX
    I have the following query that runs within a proc. The function MyFunction returns a table, and this query joins on that table. The proc works great when a @MyArg value is supplied; however, I'm wondering if there's a way to run it for all @MyArg values in the database. I'm sure there's a way to do it within a loop, but I know that loops are generally to be avoided at the db layer. I really just need to perform this to check (and possibly cleanse) some bad data.

        SELECT ColumnA, ColumnB, ColumnC
        FROM (
            SELECT a.ColumnA, a.ColumnB, a.ColumnC,
                   ROW_NUMBER() OVER (PARTITION BY a.ColumnD
                                      ORDER BY f.ColumnX) AS RowNum
            FROM dbo.MyTableA AS a
            INNER JOIN dbo.MyFunction(@MyArg) f ON f.myID = a.myID
            WHERE (a.myBit = 1 OR a.myID = @MyArg)
        ) AS x
        WHERE x.RowNum = 1;

  • Error occurs while using SPADE method in R

    - by Yuwon Lee
    I'm currently mining sequence patterns using the SPADE algorithm in R; SPADE is included in the "arulesSequences" package. I'm running R on CentOS 6.3 64-bit. As an exercise, I tried the example presented at http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Sequence_Mining/SPADE When I run

        cspade(x, parameter = list(support = 0.4), control = list(verbose = TRUE))

    R says:

        parameter specification:
        support : 0.4
        maxsize : 10
        maxlen  : 10

        algorithmic control:
        bfstype : FALSE
        verbose : TRUE
        summary : FALSE

        preprocessing ... 1 partition(s), 0 MB [0.096s]
        mining transactions ... 0 MB [0.066s]
        reading sequences ...Error in asMethod(object) : 's' is not an integer vector

    When I run SPADE on my Windows 7 32-bit machine, it runs without any error. Does anybody know why such errors occur?

  • TCP Scanner Python MultiThreaded

    - by user1473508
    I'm trying to build a small TCP scanner for a netmask. The code is as follows:

        import socket, sys, re, struct
        from socket import *

        host = sys.argv[1]

        def RunScanner(host):
            s = socket(AF_INET, SOCK_STREAM)
            s.connect((host, 80))
            s.settimeout(0.1)
            String = "GET / HTTP/1.0"
            s.send(String)
            data = s.recv(1024)
            if data:
                print "host: %s have port 80 open" % (host)

        Slash = re.search("/", str(host))
        if Slash:
            netR, _, Wholemask = host.partition('/')
            Wholemask = int(Wholemask)
            netR = struct.unpack("!L", inet_aton(netR))[0]
            for host in (inet_ntoa(struct.pack("!L", netR + n))
                         for n in range(0, 1 << 32 - Wholemask)):
                try:
                    print "Doing host", host
                    RunScanner(host)
                except:
                    pass
        else:
            RunScanner(host)

    To launch: python script.py 10.50.23.0/24 The problem I'm having is that even with a ridiculously low settimeout value, it takes ages to cover the 255 IP addresses, since most of them are not assigned to a machine. How can I make a much faster scanner that won't get stuck when the port is closed? Multithreading? Thanks!
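
    Two things fix the speed, and a sketch of both follows (Python 3): set the timeout before connect() (set afterwards it never applies to the connect itself, which is why unassigned addresses hang for the full TCP timeout), and probe many hosts at once from a thread pool. connect_ex returns an error code instead of raising, which keeps the worker simple:

        import struct
        import sys
        from concurrent.futures import ThreadPoolExecutor
        from socket import (AF_INET, SOCK_STREAM, inet_aton, inet_ntoa,
                            socket)

        def scan(host, port=80, timeout=0.5):
            s = socket(AF_INET, SOCK_STREAM)
            s.settimeout(timeout)                 # BEFORE connect, so it applies
            if s.connect_ex((host, port)) == 0:   # 0 == connection accepted
                print("host: %s has port %d open" % (host, port))
            s.close()

        net, _, bits = sys.argv[1].partition("/")     # e.g. 10.50.23.0/24
        base = struct.unpack("!L", inet_aton(net))[0]
        hosts = [inet_ntoa(struct.pack("!L", base + n))
                 for n in range(1 << (32 - int(bits)))]

        with ThreadPoolExecutor(max_workers=64) as pool:
            pool.map(scan, hosts)                 # 64 probes in flight at once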

  • Spreading dynamics with community structure

    - by YogurtFruit
    I have a data set with which I hope to simulate spreading dynamics with community structure. The steps I follow are:

        1. import the data into a complex network with NetworkX
        2. partition the network into modules, known as communities
        3. simulate the SIS model and draw plots with and without communities

    Something confuses me between steps 2 and 3. After partitioning, I get communities, each containing node numbers, and the community numbers and node numbers are the only input to step 3. How do I simulate SIS with and without communities?
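
    A minimal discrete-time SIS sketch with NetworkX. The karate-club graph and the two-block node-to-community dict below stand in for the real data and the real partition output (python-louvain's best_partition returns a dict of this shape); "with communities" here just means tallying the infected count per community each step instead of only globally:

        import random
        from collections import Counter

        import networkx as nx

        def sis_step(G, infected, beta=0.10, gamma=0.05):
            """One synchronous SIS update: each infected node infects each
            susceptible neighbour w.p. beta, and recovers w.p. gamma."""
            nxt = set(infected)
            for node in infected:
                for nbr in G[node]:
                    if nbr not in infected and random.random() < beta:
                        nxt.add(nbr)
                if random.random() < gamma:
                    nxt.discard(node)
            return nxt

        G = nx.karate_club_graph()                        # stand-in network
        partition = {n: 0 if n < 17 else 1 for n in G}    # assumed communities
        infected = set(random.sample(list(G), 3))         # random seed nodes

        for t in range(50):
            infected = sis_step(G, infected)
            by_comm = Counter(partition[n] for n in infected)
            print(t, len(infected), dict(by_comm))        # global + per community

    One common way to produce the "without communities" baseline is to run the same dynamics on a degree-preserving rewiring of the graph (e.g. nx.double_edge_swap with many swaps), which destroys the community structure while keeping the degree sequence.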

  • Help needed in pivoting (SQL Server 2005)

    - by Newbie
    I have a table like

        ID  Grps  Vals
        --  ----  ----
        1   1     1
        1   1     3
        1   1     45
        1   2     23
        1   2     34
        1   2     66
        1   3     10
        1   3     17
        1   3     77
        2   1     144
        2   1     344
        2   1     555
        2   2     11
        2   2     22
        2   2     33
        2   3     55
        2   3     67
        2   3     77

    The desired output is

        ID  Record1  Record2  Record3
        --  -------  -------  -------
        1   1        23       10
        1   3        34       17
        1   45       66       77
        2   144      11       55
        2   344      22       67
        2   555      33       77

    I have tried a while loop, but the program runs slowly, and I have been asked to use a set-based approach instead. My approach so far is

        SELECT ID, [1] AS [Record1], [2] AS [Record2], [3] AS [Record3]
        FROM (
            SELECT Row_Number() Over (Partition By ID Order By Vals) records, *
            FROM myTable
        ) x
        PIVOT (MAX(Vals) FOR Grps IN ([1], [2], [3])) p

    but it is not working. Can anyone please help solve this? (SQL Server 2005)

  • Boot from USB on MediaSmart EX485

    - by Matt Hanson
    I have an HP MediaSmart EX485. I'm attempting to install Vail on it from a USB flash drive, following this guide: http://www.mediasmartserver.net/2010/04/26/how-to-install-windows-home-server-vail-on-the-hp-mediasmart-server/. I'm having trouble getting the server to boot from the USB flash drive; the MediaSmart being headless doesn't help matters. The flash drive has an LED for disk activity, so I can see that the drive is detected when the server is powered on, but it's definitely not being booted from. Any ideas?
