Search Results

Search found 69138 results on 2766 pages for 'oracle data mining'.


  • Delete data from a SQL Server database on a full partition

    - by aleroot
    I have a SQL Server 2005 database on a dedicated partition. Over time the database has grown, and it now occupies all the space on the partition. The only operation I can still perform on the database is detach, but I want to remove old data from some tables to save space. How can I remove old data from the database if the SQL Server interface won't let me run queries against it?
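
    For what it's worth, a minimal sketch of one approach, assuming the transaction log still has (or can be given) a little free space and that old rows are identifiable by a date column. The table, column, and database names below are hypothetical, and the commands can be run from sqlcmd if the GUI refuses to open a query window:

        -- delete in small batches so log growth stays bounded
        DELETE TOP (10000) FROM dbo.OldLogs
        WHERE CreatedOn < DATEADD(YEAR, -1, GETDATE());
        -- repeat until 0 rows are affected, then reclaim the freed space:
        DBCC SHRINKDATABASE (MyDatabase);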


  • Oracle: How can I tell whether a date is a business day or a holiday?

    - by Rajesh Kumar G
    Hi, consider three different centers across the country, each with its own holiday schedule. I want to determine whether the current date is a business day or not (eliminating Saturdays, Sundays, and holidays). Which approach is feasible: should I store the holiday details, with descriptions, in three separate tables for the three centers, or in three separate files? Is it possible to read such a file using PL/SQL?
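
    A single holiday table keyed by center is usually simpler than three tables or flat files; a minimal sketch of that layout (the table and column names are assumptions, not from the question):

        CREATE TABLE center_holidays (
            center_id   NUMBER        NOT NULL,
            holiday     DATE          NOT NULL,
            description VARCHAR2(100),
            CONSTRAINT pk_center_holidays PRIMARY KEY (center_id, holiday)
        );

        -- Is today a business day for center 1?
        SELECT CASE
                 WHEN TO_CHAR(SYSDATE, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH')
                      IN ('SAT', 'SUN') THEN 'N'
                 WHEN EXISTS (SELECT 1
                              FROM   center_holidays h
                              WHERE  h.center_id = 1
                              AND    h.holiday   = TRUNC(SYSDATE)) THEN 'N'
                 ELSE 'Y'
               END AS is_business_day
        FROM dual;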


  • Basic help pulling data from JSON

    - by Webby
    New to JSON data and struggling; I guess the answer is really easy, but it's been bugging me for the last hour. Sample data:

        {
            "data": {
                "userid": "17",
                "dates": {
                    "timestame": "1275528578"
                },
                "username": "harino54"
            }
        }

    OK, I can pull userid or username easily enough with echo "$t->userid" or echo "$t->username", but how do I pull data from the brackets within, in this case timestame? I can't seem to figure it out. Any ideas?


  • Recover data from a Windows Dynamic Volume Spanned Disk

    - by iCe
    I have a dynamic volume created with two spanned partitions over two disks. Recently, one disk has started failing, and I want to copy the data on that disk to another disk before replacing it. However, I don't know how to select only what is on the failing disk, because the volume spans both disks. Maybe imaging the entire disk would do the job? Or do I have to copy all the data from both disks? Thanks in advance!


  • Change a Munin server and keep the data

    - by Khelben
    We are migrating some servers, and we need to change our Munin server. Most of the Munin nodes are unchanged, and we would like to keep the historical data if possible. I can set up a new Munin server, but I'd like to know whether it's possible to transfer the old data to the new server, and how to do it.


  • Help me choose the best solution for building my software

    - by rima
    Before answering, please think about the future of this kind of program. I want to get some data from an Oracle server, specifically: 1) retrieve all functions, packages, procedures, etc., so I can display them, drop them, and so on; 2) compile my *.sql files and get the results, including any errors. Because I was an Oracle beginner, I first tried to solve the second problem by driving SQL*Plus: I run sqlplus as a process from C#, redirect its output stream, trace what happens, and hand the resulting messages to the user. This part now works, though I still have a little trouble collecting the complete result because the output is asynchronous. After that I tried to retrieve the names of all functions, packages, and procedures the same way, but ran into speed problems, so I tried Oracle.DataAccess.dll to connect to the database instead. Now I am confused about which is the correct way to build a program that works like Oracle Developer; I have no experience with how such programs work. If your answer is that I should use the second way (a direct connection): I looked a little at Golden and PL/edit (Benthic Software), and I am unsure how to build the connection string. How do I find the host name and the port Oracle is listening on? Do I need to read the tnsnames.ora file? If your answer is that I should use the first way (driving SQL*Plus): do you have any ideas on how to parse the output? The result of querying a table, for example, is confusing to parse; I can program around it, but I would really benefit from someone's experience, because what matters to me is learning how such software responds so cleanly and quickly, and every statement produces differently styled output. If you are not sure, can you suggest a book that would help me become expert in this area? C# books, for example, only cover how to connect to the database, and database books cover how to use the database; I am looking for a book that gives some idea of how to develop an interface that handles the transactions between the two, not simply sending and receiving data, for example how to write a compiler for them. The language of the book doesn't matter; I know C#, Java, VB, SQL, and Oracle. Thanks.
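
    For the first requirement, at least, no SQL*Plus parsing is needed: the data dictionary exposes stored objects directly over any connection. A minimal sketch against the current schema (swap user_objects for all_objects plus an owner filter to look at other schemas); the status column also shows which objects failed to compile:

        SELECT object_name, object_type, status
        FROM   user_objects
        WHERE  object_type IN ('FUNCTION', 'PACKAGE', 'PROCEDURE')
        ORDER  BY object_type, object_name;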


  • Extending a hard disk partition after regrowing the drive, without losing data

    - by Albert Widjaja
    Hi all, I wonder whether it is possible to extend or regrow a Linux hard disk partition from 8 GB to 20 GB without losing the existing data on the partition. At the moment this Ubuntu Linux is deployed on top of VMware, and I've just regrown the virtual hard drive from 8 GB to 20 GB, but I can't see the effect immediately. Can anyone suggest how to do this without losing the data? I also found some strange error messages when I run fdisk -l.


  • How should I join these 3 SQL queries in Oracle?

    - by Nazgulled
    I have these 3 queries:

        SELECT title, year, MovieGenres(m.mid) genres,
               MovieDirectors(m.mid) directors, MovieWriters(m.mid) writers,
               synopsis, poster_url
        FROM movies m
        WHERE m.mid = 1;

        SELECT AVG(rating) FROM movie_ratings WHERE mid = 1;

        SELECT COUNT(rating) FROM movie_ratings WHERE mid = 1;

    And I need to join them into a single query. I was able to do it like this:

        SELECT title, year, MovieGenres(m.mid) genres,
               MovieDirectors(m.mid) directors, MovieWriters(m.mid) writers,
               synopsis, poster_url, AVG(rating) average, COUNT(rating) count
        FROM movies m
        INNER JOIN movie_ratings mr ON m.mid = mr.mid
        WHERE m.mid = 1
        GROUP BY title, year, MovieGenres(m.mid), MovieDirectors(m.mid),
                 MovieWriters(m.mid), synopsis, poster_url;

    But I don't really like that "huge" GROUP BY; is there a simpler way to do it?
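
    One common rewrite (a sketch, not from the original thread) aggregates the ratings in a derived table first, so the outer query needs no GROUP BY at all; using a LEFT JOIN also keeps movies that have no ratings yet:

        SELECT title, year, MovieGenres(m.mid) genres,
               MovieDirectors(m.mid) directors, MovieWriters(m.mid) writers,
               synopsis, poster_url, mr.average, mr.rating_count
        FROM movies m
        LEFT JOIN (SELECT mid, AVG(rating) average, COUNT(rating) rating_count
                   FROM movie_ratings
                   GROUP BY mid) mr ON m.mid = mr.mid
        WHERE m.mid = 1;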


  • Extracting data from Visual FoxPro databases

    - by whitequark
    I just got some 20 GB of data in a Visual FoxPro database, with a custom frontend probably written in the same framework, and I need to extract that data into any well-known format. I don't know anything about VFP in particular, but since it speaks SQL there should be a way to open an SQL console, or maybe a vfpdump utility. How can I do that? All I have right now is a bunch of obscure binary files and a frontend executable.


  • Ubuntu on Oracle VirtualBox: Shared folders

    - by Rosarch
    I looked at this question, but it didn't help. I'm running Windows 7 as the host with Ubuntu 10.10 as the guest on VirtualBox 4.0, and I want a shared directory between the two. I have installed Guest Additions. I went to the VirtualBox control panel in Windows, added a shared folder (share name Shared_Folder), and chose "Auto Mount". A directory named "sf_Shared_Folder" appeared in /media on Ubuntu, but when I put files in that directory from one OS, I can't see them from the other. I then tried to create a share without automounting (share name collectivefiles) and to run the following command:

        foo@foo-VirtualBox:~$ sudo mount -t vboxsf collectivefiles FileShare
        /sbin/mount.vboxsf: mounting failed with the error: No such device

    What is causing this error? I rebooted both the VM and VirtualBox itself, but I'm still seeing it.


  • Encrypt Data Prior to Upload

    - by TheW
    I'm looking to store some data online, but I want to encrypt the files first. Since I understand that SFTP only encrypts the transmission of the data, I'm wondering what programs others use to encrypt their files before SFTPing them to a backup server. Thanks.


  • Load balancers, multiple data centers and URL-based routing

    - by kunkunur
    There is one data center, dc1. There is a business need to set up another data center, dc2, in another geography, and there might be more in the future, say dc3.

    Within data center dc1: there are two web servers, say WS1 and WS2. These two web servers do not share anything currently, and there is no foreseen need for more web servers within each dc. dc1 also has a local load balancer set up with session stickiness, so if a user u1 lands on dc1 and the load balancer decides to route his first request to WS1, then from there on all of u1's requests get routed to WS1. The local load balancer and the web servers are invisible to the user. The local load balancer listens to traffic on a virtual IP assigned to the virtual cluster of web servers WS1 and WS2; the virtual IP is the IP the host name resolves to in DNS. There are no client-specific subdomains as of now; instead there is a client-specific URL (context), e.g. www.example.com/client1 and www.example.com/client2.

    Given the above, when dc2 is onboarded I want to route traffic between dc1 and dc2 based on the client. The options I have found so far are:

    1. Have client-specific subdomains, e.g. client1.example.com and client2.example.com, and assign each of them the virtual IP of the data center to which I want to route them.

    2. Assign www.example.com and www1.example.com to the first dc, i.e. dc1, and assign www2.example.com to dc2. All requests first get routed to dc1, where WS1 and WS2 redirect the user to www1.example.com or www2.example.com based on whether the URL ends with /client1 or /client2.

    I need help with the following: if I set up a global load balancer between dc1 and dc2, do I have any alternative solutions? That is, can a global load balancer route traffic based on the URL? And are there drawbacks to the subdomain-based solution compared to the www1 solution? With the www1 solution I am worried that it creates a dependency on dc1, at least for the first request, and that the user will see that he is getting redirected to a different URL.


  • How to work with a DataGridView that needs to show a lot of data (approx. 1 million items)

    - by ruprog
    I have a problem displaying data in a DataGridView: a large amount of data (stock quotes) has to be displayed from left to right. Tell me what to do to display this array of data in the DataGridView.

        Public dat As New List(Of act)

        Public Class act
            Public time As Date
            Public price As Integer
        End Class

        Sub work()
            ' build one million sample quotes
            Dim r As New Random
            For x As Integer = 0 To 1000000
                Dim el As New act
                el.time = Now
                el.price = r.Next(0, 1000)
                dat.Add(el)
            Next
        End Sub


  • Removing RAID 1 (mirroring) and leaving data on both drives

    - by ajma
    Hello, I have two drives in a RAID 1 (mirroring) array, using whatever hardware RAID is built into an Intel motherboard (Asus P5BE). I'd like to remove one drive but keep the data on both (I want to put one of the drives into another machine). Can I go into the RAID configuration, remove the array, and have the data remain?


  • Various ways to send data to the web server

    - by Webrsk
    Client environment: Windows XP, Internet connection available, PHP not installed. Server environment: CentOS, Internet connection available, PHP and MySQL installed. Data is stored in files on the client machine; please suggest good ways to send the data fetched from those files to the server. Normally I would send the data to the server with an HTTP request using cURL, but the client machine doesn't have PHP installed. What are all the ways to send data to the server, and how do they compare?


  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface.

    To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

        $ cd /path/to/data
        $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere from 2-6 GB. The terminal on the source.host machine displays a "Write failed: broken pipe" error.

    The odd part is this: after this occurs, the source.host machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from source.host, and I cannot ping source.host from any other host on the network. I am equally unable to access any of the web services hosted on source.host. Running ifconfig on source.host shows the network adapter to be up and running as usual, with the correct IP address and everything.

    I tried restarting the networking service with

        $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again -- it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal -- but attempting the copy operation again results in the same failure, requiring another restart.

    As an experiment, I tried replacing the tar/ssh pipeline above with a straight scp:

        $ scp -r datafolder/ [email protected]:~

    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

        # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!


  • How would you structure your entity model for storing arbitrary key/value data with different data types?

    - by Nathan Ridley
    I keep coming across scenarios where it would be useful to store a set of arbitrary data in a table using a per-row key/value model rather than a rigid column/field model. The problem is, I want to store the values with their correct data types rather than converting everything to a string. This means I have to choose either a single table with multiple nullable columns, one for each data type, or a set of value tables, one for each data type. I'm also unsure whether I should use full third normal form and separate the keys into their own table, referencing them via a foreign key from the value table(s), or if it would be better to keep things simple, store the string keys in the value table(s), and accept the duplication of strings.

    Old/bad: this solution makes adding additional values a pain in a fluid environment, because the table needs to be modified regularly.

        MyTable
        ============================
        ID    Key1    Key2    Key3
        int   int     string  date
        ----------------------------
        1     Value1  Value2  Value3
        2     Value4  Value5  Value6

    Single-table solution: this allows simplicity via a single table. The querying code still needs to check for nulls to determine which data type the field is storing. A check constraint is probably also required to ensure only one of the value fields contains non-null data.

        DataValues
        =============================================================
        ID   RecordID  Key     IntValue  StringValue  DateValue
        int  int       string  int       string       date
        -------------------------------------------------------------
        1    1         Key1    Value1    NULL         NULL
        2    1         Key2    NULL      Value2       NULL
        3    1         Key3    NULL      NULL         Value3
        4    2         Key1    Value4    NULL         NULL
        5    2         Key2    NULL      Value5       NULL
        6    2         Key3    NULL      NULL         Value6

    Multiple-table solution: this allows more concise purposing of each table, though the code needs to know the data type in advance, as it must query a different table for each type. Indexing is probably simpler and more efficient because there are fewer columns that need indexing.

        IntegerValues
        ===============================
        ID   RecordID  Key     Value
        int  int       string  int
        -------------------------------
        1    1         Key1    Value1
        2    2         Key1    Value4

        StringValues
        ===============================
        ID   RecordID  Key     Value
        int  int       string  string
        -------------------------------
        1    1         Key2    Value2
        2    2         Key2    Value5

        DateValues
        ===============================
        ID   RecordID  Key     Value
        int  int       string  date
        -------------------------------
        1    1         Key3    Value3
        2    2         Key3    Value6

    How do you approach this problem? Which solution is better? Also, should the key column be separated into its own table and referenced via a foreign key, or should it be kept in the value table and bulk-updated if for some reason a key name changes?
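
    For the single-table variant, the "exactly one value column populated" rule can be enforced declaratively. A T-SQL-flavored sketch, reusing the column names from the tables above (Key is renamed KeyName to dodge the reserved word; the constraint itself is an assumption, not part of the question):

        CREATE TABLE DataValues (
            ID          INT          NOT NULL PRIMARY KEY,
            RecordID    INT          NOT NULL,
            KeyName     VARCHAR(50)  NOT NULL,
            IntValue    INT          NULL,
            StringValue VARCHAR(255) NULL,
            DateValue   DATETIME     NULL,
            -- exactly one of the three typed columns may be non-null
            CONSTRAINT CK_DataValues_OneValue CHECK (
                (CASE WHEN IntValue    IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN StringValue IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN DateValue   IS NOT NULL THEN 1 ELSE 0 END) = 1
            )
        );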


  • Does Oracle re-hash the driving table for each join on the same table columns?

    - by thecoop
    Say you've got the following query on 9i:

        SELECT /*+ USE_HASH(t2 t3) */ *
        FROM table1 t1 -- this has lots of rows
        LEFT JOIN table2 t2
               ON t1.col1 = t2.col1
              AND t1.col2 = t2.col2
        LEFT JOIN table3 t3
               ON t1.col1 = t3.col1
              AND t1.col2 = t3.col2

    Due to 9i not having RIGHT OUTER HASH JOIN, it needs to hash table1 for both joins. Does it re-hash table1 between joining t2 and t3 (even though it's using the same join columns), or does it keep the same hash information for both joins?
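
    One way to investigate on a particular system (a sketch, assuming a 9iR2 install where DBMS_XPLAN is available) is to look at the execution plan; each HASH JOIN OUTER step listed there has its own build input, which shows what gets hashed at each stage:

        EXPLAIN PLAN FOR
        SELECT /*+ USE_HASH(t2 t3) */ *
        FROM table1 t1
        LEFT JOIN table2 t2 ON t1.col1 = t2.col1 AND t1.col2 = t2.col2
        LEFT JOIN table3 t3 ON t1.col1 = t3.col1 AND t1.col2 = t3.col2;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);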


  • Expose JSON as queryable for jQuery

    - by Ted
    I am trying to expose some data (user names) in JSON format on my server. I want to use the jQuery.getJSON() method to query the data. I can get my data converted to JSON with Newtonsoft.dll on the server and save it in a file, but as far as I know a static file is not queryable. I want something like http://search.twitter.com/search.json?callback=?&q=abc Can anyone help me expose my data in the above format?


  • cURL Upload file AND send POST data

    - by kisplit
    Hello, I have a web server running some PHP that checks for an uploaded image (curl -F 'imageName=@myimage') and also checks the POST data for username= and password=. When the PHP checks $_REQUEST, I can just do:

        curl -F 'imageName=@myimage' 'http://www.example.com/?upload=1&username=test&password=test'

    I need to instead check $_POST for username and password, due to the spec. How can I upload the image and also have the username= and password= POST data? Any help appreciated!


  • Using jQuery to perform a GET request and using the resulting data

    - by Filip Ekberg
    I have a page, let's call it "callme.html", which only has this content: abc. Now I want to fire the following:

        $.get("callme.html", function (data) {
            alert(data);
        }, "text");

    I am using the minified jQuery 1.4.2, and the page is called but the alert is empty. Any ideas why? I'd like the popup to contain abc. I've also tried the following:

        $.ajax({
            url: "callme.html",
            async: false,
            success: function (data) {
                alert(data);
            }
        });


  • Data recovery after Windows format with Ubuntu 10.10

    - by mathew
    Hello, I had a system running Windows 7 Home Premium and Ubuntu 10.04 side by side (dual boot). I got an Ubuntu 10.10 image disk, so I decided to upgrade. During the installation I think I made a mistake by specifying the whole partition, and after installing Ubuntu 10.10 I saw that my Windows install and all my other data were gone. There was around 250 GB of it. Is there any way I can recover this data? I had a lot of irreplaceable photos and collections on the drive. I do have a recovery CD for my Windows, but it does not detect any Windows OS. Thank you very much. [email protected]


  • Linux data storage and partitioning

    - by Rajeev
    In the following output of df -h you can see that I have added a new hard drive (/dev/sdb1) and mounted it as /hdd1. My question is: if I start dumping data to /opt, will that data land on /hdd1 or on /? My goal is to utilize the new drive instead of the old disk (/dev/sda3). How can this be done?

        Filesystem      Size  Used  Avail  Use%  Mounted on
        /dev/sda3       442G  312G    12G   86%  /
        tmpfs           1.9G     0   1.9G    0%  /dev/shm
        /dev/sda1       194M   57M   128M   31%  /boot
        /dev/sdb1       1.7T  201M   2.6T    1%  /hdd1


  • iPhone - NSURLConnection does not receive data

    - by Jukurrpa
    Hi, I have a pretty weird problem with NSURLRequest. I'm using them to do asynchronous image loading in a UITableView. The first time the tableView displays, all connections from NSURLRequests open correctly but receive absolutely no data, regardless of how long I wait. But as soon as I scroll down in the tableView, the newly created requests for the new cells work perfectly! The only way to load the images at the top of the tableView is to make them disappear by scrolling down and then up again, in order to create new requests. Here is what I do in "cellForRowAtIndexPath":

        UITableViewCell* cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithFrame:CGRectMake(0, 0, 300, 60)];
            AsyncUIImageView* imageView = [[AsyncUIImageView alloc] initWithFrame:CGRectMake(0, 0, 60, 60)];
            imageView.tag = IMG_VIEW; // an enum for tags
            [cell addSubview:imageView];
            [imageView release];
        }
        AsyncUIImageView* imageView = (AsyncUIImageView*)[cell viewWithTag:IMG_VIEW];
        // I do a few cache checks here, but if the image ain't cached I do this
        // (all URLs are different, this is just an example):
        [imageView loadImageFromURL:@"http://someurl.com/somepix.jpg"];

    The AsyncUIImageView inherits from UIImageView and contains an NSURLConnection which opens upon calling the loadImageFromURL method:

        - (void)loadImageFromURL:(NSString*)fileName {
            if (self.connection != nil) [self.connection release];
            if (self.data != nil) [self.data release];
            NSURLRequest* request = [NSURLRequest requestWithURL:[[NSURL alloc] initWithString:fileName]
                                                     cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                 timeoutInterval:10.0];
            self.connection = [[NSURLConnection alloc] initWithRequest:request delegate:self];
            if (self.connection == nil) return;
            self.data = [[NSMutableData data] retain];
        }

    I've created the delegate methods "connection:didReceiveData:", which appends received data to self.data, and "connectionDidFinishLoading:", which sets the image and closes the connection once the transfer is complete. These work, but are never called for the first requests I create. I suspect this bug comes from the main thread not giving the first requests control to execute, as the same behavior happens if I keep my finger on the screen after a scroll: connections open themselves, but no data is received until I stop touching the screen. What am I doing wrong?

