Search Results

Search found 108959 results on 4359 pages for 'ado net data services'.


  • post image and other data using multipart form data in iPhone

    - by abdulsamad
    Hi all. I am sending some data and an image to the server using multipart/form-data in Objective-C. Could someone give me some PHP code showing how to save the image on the server? I am able to read the other variables that I pass along with the image, but not the image itself. Kindly look at my Objective-C and PHP code and tell me where I am wrong; your help will be highly appreciated. Here is where I build the POST request:

        NSString *stringBoundary, *contentType, *baseURLString, *urlString;
        NSData *imageData;
        NSURL *url;
        NSMutableURLRequest *urlRequest;
        NSMutableData *postBody;

        // Create POST request from message, imageData, username and password
        baseURLString = @"http://localhost:8888/Test.php";
        urlString = [NSString stringWithFormat:@"%@", baseURLString];
        url = [NSURL URLWithString:urlString];
        urlRequest = [[[NSMutableURLRequest alloc] initWithURL:url] autorelease];
        [urlRequest setHTTPMethod:@"POST"];

        // Set the params
        NSString *path = [[NSBundle mainBundle] pathForResource:@"LibraryIcon" ofType:@"png"];
        imageData = [[NSData alloc] initWithContentsOfFile:path];

        // Setup POST body
        stringBoundary = [NSString stringWithString:@"0xKhTmLbOuNdArY"];
        contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@", stringBoundary];
        [urlRequest addValue:contentType forHTTPHeaderField:@"Content-Type"];

        // Setting up the POST request's multipart/form-data body
        postBody = [NSMutableData data];
        [postBody appendData:[[NSString stringWithFormat:@"\r\n\r\n--%@\r\n", stringBoundary] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:@"Content-Disposition: form-data; name=\"source\"\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:@"lighttable"] dataUsingEncoding:NSUTF8StringEncoding]]; // So Light Table shows up as source in Twitter post

        [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", stringBoundary] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:@"Content-Disposition: form-data; name=\"title\"\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:book.title] dataUsingEncoding:NSUTF8StringEncoding]]; // title

        [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", stringBoundary] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:@"Content-Disposition: form-data; name=\"isbn\"\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:book.isbn] dataUsingEncoding:NSUTF8StringEncoding]]; // isbn

        [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", stringBoundary] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:@"Content-Disposition: form-data; name=\"price\"\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:txtPrice.text] dataUsingEncoding:NSUTF8StringEncoding]]; // price

        [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", stringBoundary] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:@"Content-Disposition: form-data; name=\"condition\"\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithString:txtCondition.text] dataUsingEncoding:NSUTF8StringEncoding]]; // condition

        NSString *imageFileName = [NSString stringWithFormat:@"photo.jpeg"];
        [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", stringBoundary] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"upload\"; filename=\"%@\"\r\n", imageFileName] dataUsingEncoding:NSUTF8StringEncoding]];
        //[postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"upload\"\r\n\n\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[@"Content-Type: image/jpeg\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:imageData];
        [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", stringBoundary] dataUsingEncoding:NSUTF8StringEncoding]];
        //[postBody appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n", stringBoundary] dataUsingEncoding:NSUTF8StringEncoding]];

        NSLog(@"postBody=%@", [[NSString alloc] initWithData:postBody encoding:NSASCIIStringEncoding]);
        [urlRequest setHTTPBody:postBody];
        NSLog(@"Image data=%@", [[NSString alloc] initWithData:imageData encoding:NSASCIIStringEncoding]);

        // Spawn a new thread so the UI isn't blocked while we're uploading the image
        [NSThread detachNewThreadSelector:@selector(uploadingDataWithURLRequest:) toTarget:self withObject:urlRequest];

    In the method uploadingDataWithURLRequest: I post the request to the server. Here is my PHP code:

        <?php
        $title = $_POST['title'];
        $isbn = $_POST['isbn'];
        $price = $_POST['price'];
        $condition = $_POST['condition'];
        $image = $_FILES['image']['name'];
        if ($image) {
            $filename = 'newimage.jpeg';
            file_put_contents($filename, $image);
            echo "image is there";
        } else {
            echo "image is nil";
        }
        ?>

    I am unable to get the image on the server; kindly help me find where I am wrong.
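
    For comparison, here is a minimal sketch of the same multipart body built with .NET's HttpClient and MultipartFormDataContent (a different platform from the question, shown only to illustrate the wire format; the URL and field names are taken from the question, the values are placeholders). Note that the Objective-C code posts the file part under the name "upload", while the PHP reads $_FILES['image']; whatever name the client sends is the key the server must read:

        // Sketch only (assumes .NET 4.5+ for HttpClient; async Main needs C# 7.1+).
        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        class UploadSketch
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                using (var form = new MultipartFormDataContent("0xKhTmLbOuNdArY"))
                {
                    form.Add(new StringContent("lighttable"), "source");
                    form.Add(new StringContent("Some Title"), "title"); // book.title in the question
                    form.Add(new StringContent("1234567890"), "isbn");  // book.isbn
                    form.Add(new StringContent("9.99"), "price");       // txtPrice.text
                    form.Add(new StringContent("used"), "condition");   // txtCondition.text

                    var image = new ByteArrayContent(new byte[0] /* jpeg bytes here */);
                    image.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
                    form.Add(image, "upload", "photo.jpeg"); // field name and filename

                    HttpResponseMessage response =
                        await client.PostAsync("http://localhost:8888/Test.php", form);
                    Console.WriteLine(await response.Content.ReadAsStringAsync());
                }
            }
        }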


  • Visual Studio 2010 Professional - Problem Unit-Testing Web Services

    - by Ben
    Have created a very simple web service (.asmx) in Visual Studio 2010 Professional, and am trying to use the auto-generated unit test cases. I get something that seems quite familiar on this site:

        The web site could not be configured correctly; getting ASP.NET process information failed.
        Requesting http://localhost:81/zfp/VSEnterpriseHelper.axd returned an error:
        The remote server returned an error: (500) Internal Server Error.

    http://stackoverflow.com/questions/260432/500-error-running-visual-studio-asp-net-unit-test

    I have tried:

    1. Running the tests on IIS rather than the ASP.NET Development Server.
    2. Adding and then removing the XML fragment to my web service's .config file.
    3. Giving the MACHINE\ASPNET account Full Control over the local folder.

    My current questions:

    1. Why am I being bothered with this instrumentation / code-coverage DLL, when this doesn't seem to be something that ships with Visual Studio 2010 Professional? Is there any way I can turn it off?
    2. I'm placing the node under in Web.config - is that the correct node?
    3. Is it possible to bind to a web service without using the webby test attributes? I've seen other people advise making the web service as lightweight as possible. I'm trying to call it with jQuery / AJAX / JSON, so being able to debug the actual web service would be really helpful.

    Best wishes, Ben


  • Connect to a remote Oracle 11g server using OracleClient of .NET 2.0

    - by Raghu M
    I have to connect to an Oracle server on the network from a .NET / C# (WinForms) application. I am trying to use System.Data.OracleClient, but in vain. Here are the details I can think of that might help someone reading this question:

    Platform: Visual Studio 2005 / .NET 2.0 with C# on Windows Vista Home Premium
    Library: System.Data.OracleClient
    Server: Oracle 11g (located on the same LAN)

    Please note that I don't have Oracle installed locally, and I have hunted every discussion forum possible for help - but most of them assume a local Oracle installation! Here is my connection string:

        User Id=TSUSER;Password=ts12TS;Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MyServerIP)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL)));

    And I get this error:

        OCIEnvCreate failed with return code -1 but error message text was not available.

    Stack trace:

        at System.Data.OracleClient.OciHandle..ctor(OciHandle parentHandle, HTYPE handleType, MODE ocimode, HANDLEFLAG handleflags)
        at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName)
        at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions)
        at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
        at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.OracleClient.OracleConnection.Open()
        at DGKit.Util.DataUtil.Generate() in D:\SVNRoot\sandbox\DGDev\Util\DataUtil.cs:line 68
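
    For reference, here is a minimal console sketch of the same connection attempt (the connection string is the one from the question). One property of System.Data.OracleClient worth noting: it relies on the native Oracle client (OCI) libraries even when the server is remote, so an "OCIEnvCreate failed" error on a machine with no local Oracle client installed usually points to a missing Oracle client/Instant Client rather than to the connection string itself.

        // Minimal sketch: opening an Oracle connection with System.Data.OracleClient.
        // The Data Source uses a full TNS descriptor, so no local tnsnames.ora entry
        // is needed -- but the native Oracle client (OCI) must still be installed.
        using System;
        using System.Data.OracleClient;

        class OracleSmokeTest
        {
            static void Main()
            {
                const string connStr =
                    "User Id=TSUSER;Password=ts12TS;" +
                    "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MyServerIP)(PORT=1521))" +
                    "(CONNECT_DATA=(SERVICE_NAME=ORCL)));";

                using (OracleConnection conn = new OracleConnection(connStr))
                {
                    conn.Open(); // throws the OCIEnvCreate error if the OCI libraries are missing
                    Console.WriteLine("Connected: " + conn.ServerVersion);
                }
            }
        }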


  • Unable to access SQL reporting services on shared site with Themes enabled

    - by Grant
    Hi, I am having some trouble with my IIS web server and SQL Reporting Services. At the current time my site plays host to both Reporting Services (/Reports and /ReportServer) and my personal website (domain.com). Only recently have I implemented a theme on my site, and to do so I placed a statement in my web.config file directing it to apply a certain theme in the following manner:

        <pages styleSheetTheme="General">

    Because of this, when I try to access the report pages it fails, telling me it couldn't find the theme. So what I did was locate the source files for the /Reports and /ReportServer directories and place the App_Theme folder in them, hoping that would sort everything out. What I am getting now is the following error:

        Using themed css files requires a header control on the page. e.g. <head runat="server" />

    Does anyone know how I can get around this? Do I have to hack the SQL Reporting Services .aspx pages? Please note I do NOT want to remove the web.config declaration.
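
    One common workaround (a sketch only, not verified against this exact setup) is to keep the site-wide theme but switch it off for the Reporting Services paths with <location> elements in the root web.config; the path values below are assumptions based on the virtual directories named in the question:

        <!-- Sketch: root web.config. An empty styleSheetTheme disables the theme
             for just these paths, leaving the site-wide declaration untouched. -->
        <location path="Reports">
          <system.web>
            <pages styleSheetTheme="" />
          </system.web>
        </location>
        <location path="ReportServer">
          <system.web>
            <pages styleSheetTheme="" />
          </system.web>
        </location>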


  • Configuring ASP.NET MVC2 on Apache 2.2 using mod_aspdotnet

    - by user40684
    Trying to get an MVC2 website to run on an Apache 2.2 web server (running on Windows) that utilizes the mod_aspdotnet module. I have several ASP.NET virtual hosts running and am trying to add another. MVC2 has NO default page (like the first version of MVC had, e.g. default.aspx). I have tried various changes to the config: commented out 'DirectoryIndex', changed it to '/', set 'AspNet' to 'Virtual' - but it will not load the first page, and I always get: '403 Forbidden: You don't have permission to access / on this server.' Below is from my httpd.conf:

        LoadModule aspdotnet_module "modules/mod_aspdotnet.so"
        AddHandler asp.net asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo

        <IfModule aspdotnet_module>
          # Mount the ASP.NET /asp application
          #AspNetMount /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
          Alias /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"

          <VirtualHost *:80>
            DocumentRoot "D:/ApacheNET/MyWebSiteName.com"
            ServerName www.MyWebSiteName.com
            ServerAlias MyWebSiteName.com
            AspNetMount / "D:/ApacheNET/MyWebSiteName.com"
            # Other directives here

            <Directory "D:/ApacheNET/MyWebSiteName.com">
              Options FollowSymlinks ExecCGI
              AspNet All
              #AspNet Virtual Files Directory
              Order allow,deny
              Allow from all
              DirectoryIndex default.aspx index.aspx index.html  # default the index page to .htm and .aspx
            </Directory>
          </VirtualHost>

          # For all virtual ASP.NET webs, we need the aspnet_client files
          # to serve the client-side helper scripts.
          AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows/Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4"

          <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles">
            Options FollowSymlinks
            Order allow,deny
            Allow from all
          </Directory>
        </IfModule>

    Has anyone successfully run MVC2 (or the first version of MVC) on Apache with the mod_aspdotnet module? Thanks!


  • Why would 70-persistent-net.rules have no effect?

    - by Wes Felter
    I've got a Saucy (Ubuntu 13.10) server with a lot of NICs, and they end up with weird names like "rename19". I know interface names can be changed by modifying the /etc/udev/rules.d/70-persistent-net.rules file. The first clue that something is wrong is that that file did not exist, even though it's supposed to be created automatically. So I decided to write my own, based on advice from Linux From Scratch:

        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.0", NAME="eth0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.1", NAME="eth1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.2", NAME="eth2"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.3", NAME="eth3"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.0", NAME="mezz0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.1", NAME="mezz1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.0", NAME="slot1a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.1", NAME="slot1b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.0", NAME="slot2a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.1", NAME="slot2b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.0", NAME="slot3a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.1", NAME="slot3b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.0", NAME="slot4a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.1", NAME="slot4b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.0", NAME="slot5a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.1", NAME="slot5b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.0", NAME="slot6a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.1", NAME="slot6b"

    (I'm matching on PCI IDs instead of MAC addresses because I have multiple identical machines that I want to apply this configuration to.) After rebooting, nothing has changed. It's like these rules aren't even being read. There's not much going on in dmesg either:

        $ dmesg | grep udev
        [    3.196629] systemd-udevd[323]: starting version 204
        [    6.719140] systemd-udevd[550]: starting version 204
        [   38.695050] init: udev-fallback-graphics main process (1658) terminated with status 1


  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003 and are used by LabVIEW-based test-automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010, and when working with these worksheets I see this warning:

        "The following features in this workbook are not supported by earlier versions of Excel. These features may be lost or degraded when you save this workbook in the currently selected file format. Click Continue to save the workbook anyway. To keep all of your features, click Cancel and then save the file in one of the new file formats."

        "Significant loss of functionality"

        "One or more cells in this workbook contain data validation rules which refer to values on other worksheets. These data validation rules will not be saved."

    When I click 'Find', some cells that do indeed have validation rules are highlighted - but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4").

    This makes no sense to me whatsoever. One, the workbook was created in Excel 2003, so obviously we couldn't have used a feature that doesn't exist. Two, the modifications we're making don't involve changing the validation rules at all. Three, the complaint Excel is making is incorrect: all of the rules are on the same worksheet as the target.

    As if the story wasn't bizarre enough: I went ahead and saved the worksheet with Excel 2010. I then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched!

    My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue when working in compatibility mode?


  • Tools for displaying a multidimensional data table?

    - by ShreevatsaR
    [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.]

    There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of in a cube instead.) Is there a way to display this "cube" interactively, ideally on a webpage?

    Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants.

    The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't need sophisticated graphics necessarily, just the ability to select from cross-sections of variables. But I have no experience with (say, for displaying on a webpage) what web gadgets exist, so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't throw up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded on a webpage, but I cannot tell if this is one of them.)

    [I can imagine how it ought to work for higher dimensions. For four dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.]

    So has this been written? Is this easy to write? Where ought one to look for such a thing?
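
    As a concrete illustration of the cross-section idea, here is a minimal sketch (C#, purely illustrative) of slicing a 3-dimensional array into the 2-dimensional tables such a viewer would stack:

        // Sketch: fix one variable (here the third, k) and extract the 2-D
        // cross-section table for it. A viewer would render one such table per
        // value of k and let the user flip through the stack.
        using System;

        class CrossSection
        {
            static double[,] SliceOnK(double[,,] cube, int k)
            {
                int ni = cube.GetLength(0), nj = cube.GetLength(1);
                var table = new double[ni, nj];
                for (int i = 0; i < ni; i++)
                    for (int j = 0; j < nj; j++)
                        table[i, j] = cube[i, j, k]; // the (i, j, k) value from the cube
                return table;
            }

            static void Main()
            {
                var cube = new double[4, 3, 2];
                cube[1, 2, 0] = 42.0;
                double[,] slice = SliceOnK(cube, 0); // the k = 0 table from the stack
                Console.WriteLine(slice[1, 2]);      // prints 42
            }
        }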



  • Creating a test database with copied data *and* its own data

    - by Jordan Reiter
    I'd like to create a test database that is refreshed each day with data from the production database. BUT, I'd like to be able to create records in the test database and retain them rather than having them be overwritten. I'm wondering if there is a simple, straightforward way to do this. Both databases run on the same server, so apparently that rules out replication?

    For clarification, here is what I would like to happen:

    1. The test database is created with production data.
    2. I create some test records that I want to keep on the test server (basically so I can have example records that I can play with).
    3. The next day, the database is completely refreshed, but the records I created that day are retained. Records that were untouched that day are replaced with records from the production database.

    The complication is that if a record in the production database is deleted, I want it to be deleted in the test database too. So I do want to get rid of records in the test database that no longer exist in the production database - unless those records were created within the test database. It seems like the only way to do this would be to have some sort of table storing metadata about the records being created, for example something like this:

        CREATE TABLE MetaDataRecords (
            id integer not null primary key auto_increment,
            tablename varchar(100),
            action char(1),
            pk varchar(100)
        );

        DELETE FROM testdb.users
        WHERE NOT EXISTS (
            SELECT * FROM proddb.users
            WHERE proddb.users.id = testdb.users.id
        )
        AND NOT EXISTS (
            SELECT * FROM testdb.MetaDataRecords
            WHERE testdb.MetaDataRecords.pk = testdb.users.pk
              AND testdb.MetaDataRecords.action = 'C'
              AND testdb.MetaDataRecords.tablename = 'users'
        );


  • saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing - it's making ticking sounds when idle. I've acquired a replacement drive and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible.

    There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach.

    I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries - but not so many retries that they endanger the rescue of other parts of the drive. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors.

    Is it possible that copying more slowly - e.g. pausing every x MB/GB - would be better than just running the operation full tilt, for example to avoid any overheating issues?

    For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that isn't backed up anywhere else. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently - orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.


  • Import data in Excel that doesn't have a row delimiter, but number of columns is known

    - by Alex B
    So I have this text file that looks something like this:

        Header1 Header2 Header3 Header4 A1 B1 C1 D1 A2 B2 C2 D2

    and so on. When imported, I'd want the data to arrange itself into 4 columns. I tried Get External Data from Text, and it successfully imports the file, but it doesn't wrap the values around: it just keeps making columns for every space. I'd want it to go on to the next line after 4 (in this case) elements have been added. What's the simplest way to achieve this?

    EDIT: My answer follows, since I'm not yet allowed to answer my own questions. The Excel function I needed is called INDIRECT(). Not sure how it actually works though, so hopefully someone can help out with that, but the call that worked for me is:

        =INDIRECT(ADDRESS((ROW(A1)-1)*4+COLUMN(A1),1))

    which I found over here: http://www.ozgrid.com/forum/showthread.php?t=101584&p=456031#post456031

    Note: this required me to add the text to Excel in a way that gave me one row full of columns, and then flip it so that I'd have a column full of rows.


  • Recovering data from an external hard drive

    - by CCallaghan
    I have a WD Elements 2GB hard drive (formatted NTFS). I accidentally kicked out the USB cable while writing data to the disk, and now I can't access most of the data. Although this was ostensibly my backup drive, there is a great deal of important material on there that was only on there. I realise how idiotic this makes me. (So, formatting is not an option.)

    Things I've tried / information I've gathered:

    Windows Explorer will recognise the drive itself. However, it will not access most directories therein (and will sometimes crash when exploring). I can access all of the directories through the command line, but the dir command will often report that it can't read any files in most of the directories.

    The situation was similar when I hooked it up to an Ubuntu machine: the file explorer crashed, but I could access directories - but not files in those directories - via terminal commands. Several files I tried to copy out either resulted in an I/O error being reported or resulted in the command line crashing.

    The Disk Management utility on Windows reports a healthy disk formatted as NTFS and not RAW. It also indicates the correct amount of space used up and its capacity (so it seems that the files are not deleted).

    I've tried to run chkdsk, but that hangs on Step 2 (checking indexes) at 74%. Step 1 reported no bad sectors. I tried Recuva, but that didn't seem to work (stalled at 0% for half an hour).

    I should also note that the disk doesn't seem to be spinning smoothly; it seems to be chopping back, like it's reading the same sector over and over again. I noticed this after I kicked out the cable. Any help would be greatly appreciated.

    Update: It would seem the problem has taken a turn for the worse. The external hard drive now shows up on my computer as a local disk and is not mountable by Linux.


  • Data recovery on working hard drive

    - by emgee
    So I have a 5-bay hot-swap SATA enclosure that's connected to a Silicon Image-based SATA adapter in a computer running XP Pro. There are two 1.5TB hard drives in slots 1 and 2, set up as RAID 1 using the Silicon Image utility, and two 1TB drives in bays 3 and 4, also set to RAID 1 the same way. The partitions for both RAID arrays are Dynamic partitions.

    A few days back, there was a bare hard drive that needed some files copied off of it, so it was popped into bay 5, that bay was set to pass-through, and the data was copied off of it. Later, I noticed that my 1.5TB drives no longer showed up in Windows. In the Silicon Image utility, the drives show up fine, with no error. However, Device Manager shows the RAID 1 array as uninitialized: it shows up as the right size, etc., but nothing else. There's no sign of anything wrong with either drive, so I'm not sure what happened exactly. I'm not the only one who has access to that computer, so it is possible something else was done to it that I don't know of.

    There's quite a lot of data on it still, and if at all possible, I'd prefer not to send it to Ontrack. Does anyone know of software that would restore the partitions, keeping in mind that it's a Windows LDM partition? I have access to a variety of operating systems, so something that works on Mac, Windows or Linux would be acceptable. The programs I usually use are not compatible with LDM.


  • Easily Plotting Multiple Data Series in Excel

    - by John
    I really need help figuring out how to speed up graphing multiple series on a graph. I have separate devices that give monthly readings for several variables like pressure, temperature, and salinity. Each of these variables is going to be its own graph, with the devices being the series. My x-axis is the dates on which these values were taken. The problem is that it takes ages to do this for each spreadsheet, since I have monthly dates from 1950 up to the present and about 50 devices in each spreadsheet. I also have graphs for calculated values that are in adjacent columns. Each of these devices is going to become a data series in the graph. E.g. in one of my graphs I have all the pressures from the devices, and each data series' name is the name of the device.

    I want a fast way to do this; doing it manually is taking a very long time. The data is consistent and the dates all line up - I am just repeating the same clicks over and over again. Please help, and thank you!


  • Getting started with Exchange Web Services 2010

    - by Adam Tuttle
    I've been tasked with writing a SOAP web service in .NET to be middleware between EWS 2010 and an application server that previously used WebDAV to connect to Exchange. (As I understand it, WebDAV is going away with EWS 2010, so the application server will no longer be able to connect as it previously did, and it is exponentially harder to connect to EWS without WebDAV. The theory is that doing it in .NET should be easier than anything else... right?!)

    My end goal is to be able to get and create/update email, calendar items, contacts, and to-do list items for a specified Exchange account. (Deleting is not currently necessary, but I may build it in for future consideration, if it's easy enough.)

    I was originally given some sample code, which did in fact work, but I quickly realized that it was outdated: the types and classes used appear nowhere in the current documentation. For example, the method used to create a connection to the Exchange server was:

        ExchangeService svc = new ExchangeService();
        svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword);
        svc.AutodiscoverUrl(AutoDiscoverEmailAddress);

    For what it's worth, this was using an assembly that came with the sample code: Microsoft.Exchange.WebServices.dll ("MEWS"). Before I realized that this wasn't the current standard way to accomplish the connection - and it worked - I tried to build on it and add a method to create calendar items, which I copied from here:

        static void CreateAppointment(ExchangeServiceBinding esb)
        {
            // Create the appointment.
            CalendarItemType appointment = new CalendarItemType();
            ...
        }

    Right away, I was confronted with the difference between ExchangeService and ExchangeServiceBinding ("ESB"), so I started Googling to try and figure out how to get an ESB definition so that the CreateAppointment method would compile. I found a blog post that explains how to generate a proxy class from a WSDL, which I did. Unfortunately, this caused some conflicts where types defined in the original Microsoft.Exchange.WebServices.dll assembly (that came with the sample code) overlapped with types in my new EWS.dll assembly (which I compiled from the code generated from the services.wsdl provided by the Exchange server). I excluded the MEWS assembly, which only made things worse: I went from a handful of errors and warnings to 25 errors and 2,510 warnings. All kinds of types and methods were not found. Something was clearly wrong, so I went back on the hunt.

    I found instructions on adding service references and web references (i.e. the extra steps it takes in VS 2008), and I think I'm back on the right track. I removed (actually, for now, just excluded) all the previous assemblies I had been trying, and I added a service reference for https://my.exchange-server.com/ews/services.wsdl. Now I'm down to just one error and one warning.

    Warning:

        The element 'transport' cannot contain child element 'extendedProtectionPolicy' because the parent element's content model is empty.

    This is in reference to a change that was made to web.config when I added the service reference; I just found a fix for that here on SO. I've commented that section out as indicated, and it did make the warning go away, so woot for that. The error hasn't been so easy to get around, though:

        Error: The type or namespace name 'ExchangeService' could not be found (are you missing a using directive or an assembly reference?)

    This is in reference to the function I was using to create the EWS connection, called by each of the web methods:

        private ExchangeService getService(String AutoDiscoverEmailAddress, String AuthEmailAddress, String AuthEmailPassword)
        {
            ExchangeService svc = new ExchangeService();
            svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword);
            svc.AutodiscoverUrl(AutoDiscoverEmailAddress);
            return svc;
        }

    This function worked perfectly with the MEWS assembly from the sample code, but the ExchangeService type is no longer available (nor is ExchangeServiceBinding - that was the first thing I checked).

    At this point, since I'm not following any directions from the documentation (I couldn't find anywhere in the documentation that said to add a service reference to your Exchange server's services.wsdl - but that does seem to be the best/farthest I've gotten so far), I feel like I'm flying blind. I know I need to figure out whatever it is that should replace ExchangeService / ExchangeServiceBinding, implement that, and then work through whatever errors crop up as a result of that switch. But I have no idea how to do that, or where to look for how to do it. Googling "ExchangeService" and "ExchangeServiceBinding" only seems to lead back to outdated blog posts and MSDN, neither of which has proven terribly helpful thus far.

    Help me, Obi-Wan, you're my only hope!
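
    For what it's worth, here is a minimal sketch of creating a calendar item in the EWS Managed API style used by the working sample code (the Microsoft.Exchange.WebServices.dll assembly), as opposed to the ExchangeServiceBinding proxy style; the credentials, address and appointment values are placeholders:

        // Sketch: EWS Managed API (namespace Microsoft.Exchange.WebServices.Data).
        // Assumes the Managed API assembly is referenced.
        using Microsoft.Exchange.WebServices.Data;

        class EwsSketch
        {
            static void Main()
            {
                ExchangeService service = new ExchangeService();
                service.Credentials = new WebCredentials("user@example.com", "password");
                service.AutodiscoverUrl("user@example.com");

                Appointment appointment = new Appointment(service);
                appointment.Subject = "Status meeting"; // illustrative values
                appointment.Start = System.DateTime.Now.AddHours(1);
                appointment.End = appointment.Start.AddHours(1);
                appointment.Save(SendInvitationsMode.SendToNone); // create without sending invites
            }
        }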


  • Core Data Migration - "Can't add source store" error

    - by Tofrizer
    Hi, in my iPhone app I'm using Core Data, and I've made changes to my data model that cannot be automatically migrated over (i.e. I added new relationships). I added a data model version (Design - Data Model - Add Model Version) and applied my new data model changes to the new version 2. I then created a mapping model and set the Source and Destination models to their correct data models (old and new, respectively). When I run the app and call the persistentStoreCoordinator, my app barfs with the following:

        2010-02-27 02:40:30.922 XXXX[73578:20b] Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 UserInfo=0xfc2240 "Operation could not be completed. (Cocoa error 134110.)", {
            NSUnderlyingError = Error Domain=NSCocoaErrorDomain Code=134130 UserInfo=0xfbb3a0 "Operation could not be completed. (Cocoa error 134130.)";
            reason = "Can't add source store";
        }

    FWIW (not much, I think) I've also made the usual code changes in persistentStoreCoordinator to use the NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption (for future data model changes that can be automatically migrated). More relevantly, my managedObjectModel is created by calling initWithContentsOfURL, where the file/resource type is "momd".

    I've tried updating both the source and destination models in the mapping model (Design - Mapping Model - Update XXX Model) as well as deleting the mapping model and recreating it. I've cleaned and rebuilt, but all to no avail; I still get the above error message.

    Any pointers/thoughts on how I can further debug or resolve this problem, please? I haven't posted any code snippets because this feels much more like a build-environment issue (my code is very standard - just the usual Core Data code to handle migrations using a mapping model - but I'm happy to show the code if it helps). Appreciate any help. Thanks


  • European Interoperability Framework - a new beginning?

    - by trond-arne.undheim
    The most controversial document in the history of the European Commission's IT policy is out. EIF is here, wrapped in the Communication "Towards interoperability for European public services", and including the new feature European Interoperability Strategy (EIS), arguably a higher strategic take on the same topic. Leaving EIS aside for a moment, the EIF controversy has centred on IPR, on the definition of open standards, and on the proper terminology for standardization deliverables. Today, as the document finally emerges, what is the verdict?

    First of all, to be fair to those among you who do not spend your lives in the intricate labyrinths of Commission IT policy documents on interoperability, let's define what we are talking about. According to the Communication: "An interoperability framework is an agreed approach to interoperability for organisations that want to collaborate to provide joint delivery of public services. Within its scope of applicability, it specifies common elements such as vocabulary, concepts, principles, policies, guidelines, recommendations, standards, specifications and practices."

    The Good

    - EIF reconfirms that "The Digital Agenda can only take off if interoperability based on standards and open platforms is ensured" and also confirms that "The positive effect of open specifications is also demonstrated by the Internet ecosystem."

    - EIF takes a productive and pragmatic stance on openness: "In the context of the EIF, openness is the willingness of persons, organisations or other members of a community of interest to share knowledge and stimulate debate within that community, the ultimate goal being to advance knowledge and the use of this knowledge to solve problems" (p. 11). "If the openness principle is applied in full: All stakeholders have the same possibility of contributing to the development of the specification and public review is part of the decision-making process; the specification is available for everybody to study; intellectual property rights related to the specification are licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software" (p. 26).

    - EIF is a formal Commission document. The former EIF 1.0 was a semi-formal deliverable from the PEGSCO, a working group of Member State representatives.

    - EIF tackles interoperability head-on and takes a clear stance: "Recommendation 22. When establishing European public services, public administrations should prefer open specifications, taking due account of the coverage of functional needs, maturity and market support."

    - The Commission will continue to support the National Interoperability Framework Observatory (NIFO), reconfirming the importance of coordinating such approaches across borders.

    - The Commission will align its internal interoperability strategy with the EIS through the eCommission initiative.

    - One cannot stress the importance of using open standards enough, whether in the context of open source or non-open source software, and the EIF seems to have picked up on this fact. What does the EIF say about the relation between open specifications and open source software? The EIF introduces, as one of the characteristics of an open specification, the requirement that IPRs related to the specification have to be licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software. In this way, companies working under various business models can compete on an equal footing when providing solutions to public administrations, while administrations that implement the standard in their own software (software that they own) can share such software with others under an open source licence if they so decide.

    - EIF is now among the centrepieces of the Digital Agenda (even though this demands extensive inter-agency coordination in the Commission): "The EIS and the EIF will be maintained under the ISA Programme and kept in line with the results of other relevant Digital Agenda actions on interoperability and standards such as the ones on the reform of rules on implementation of ICT standards in Europe to allow use of certain ICT fora and consortia standards, on issuing guidelines on essential intellectual property rights and licensing conditions in standard-setting, including for ex-ante disclosure, and on providing guidance on the link between ICT standardisation and public procurement to help public authorities to use standards to promote efficiency and reduce lock-in." (Communication, p. 7)

    All in all, quite a few good things have happened to the document in the two years it has been on the shelf (or was being re-written, depending on your perspective), in any case awaiting the storms to calm.

    The Bad

    - While a certain pragmatism is required, and governments cannot migrate to full openness overnight, EIF gives a bit too much room for governments not to apply the openness principle in full. Plenty of reasons are given, which should perhaps have been put as challenges to be overcome: "However, public administrations may decide to use less open specifications, if open specifications do not exist or do not meet functional interoperability needs. In all cases, specifications should be mature and sufficiently supported by the market, except if used in the context of creating innovative solutions."

    - EIF does not use the internationally established terminology: open standards. Rather, the EIF introduces the notion of "formalised specification". How do "formalised specifications" relate to "standards"? According to the FAQ provided: the word "standard" has a specific meaning in Europe as defined by Directive 98/34/EC. Only technical specifications approved by a recognised standardisation body can be called a standard. Many ICT systems rely on the use of specifications developed by other organisations such as a forum or consortium. The EIF introduces the notion of "formalised specification", which is either a standard pursuant to Directive 98/34/EC or a specification established by ICT fora and consortia. The term "open specification" used in the EIF, on the one hand, avoids terminological confusion with the Directive and, on the other, states the main features that comply with the basic principle of openness laid down in the EIF for European Public Services. Well, this may be somewhat true, but in reality Europe is 30 years behind in terminology. Unless the European Standardization Reform gets completed in the next few months, most Member States will likely conclude that they will go on referencing and using standards beyond those created by the three European endorsed monopolists of standardization, CEN, CENELEC and ETSI. Who can afford to begin following the strict Brussels rules for what they can call open standards when, in reality, standards stemming from global standardization organizations, so-called fora/consortia, dominate the IT industry? What exactly is the EIF saying? Does it encourage Member States to go on using non-ESO standards as long as they call them something else? I guess I am all for it, although it is a bit cumbersome, no?

    Why was there so much interest around the EIF? The FAQ attempts to explain: "Some Member States have begun to adopt policies to achieve interoperability for their public services. These actions have had a significant impact on the ecosystem built around the provision of such services, e.g. providers of ICT goods and services, standardisation bodies, industry fora and consortia, etc. ... The Commission identified a clear need for action at European level to ensure that actions by individual Member States would not create new electronic barriers that would hinder the development of interoperable European public services. As a result, all stakeholders involved in the delivery of electronic public services in Europe have expressed their opinions on how to increase interoperability for public services provided by the different public administrations in Europe." Well, it does not take two years to read 50 consultation documents, and the EU Standardization Reform is not yet completed, so, more pragmatically, you finally had to release the document. OK, let's leave some of that aside, because the document is out and some people are happy (and others definitely not).

    The Verdict

    Considering the controversy, the delays, the lobbying, and the interests at stake in the EU, in Member States and among vendors large and small, this document is pretty impressive. As with a good wine that has not yet come to full maturity, let's say that it seems to be coming in in the 85-88/100 range, but only a more fine-grained analysis, enjoyment in good company and, ultimately, implementation will tell. The European Commission has today adopted a significant interoperability initiative to encourage public administrations across the EU to maximise the social and economic potential of information and communication technologies. Today, we should rally around this achievement. Tomorrow, let's sit down and figure out what it means for the future.


  • Tutorials for .NET database app using SQLite

    - by ChrisC
    I have some MS Access experience, and I took a class on console C++ apps; now I am trying to develop my first program. It's a little C# database app. I have the db tables and columns planned and keyed into Visual Studio, but that's where I'm stuck. I need C#/VS tutorials that will guide me in configuring relationships, datatypes, etc., on the db so I can get it ready for testing of the schema. The only tutorials I've been able to find either talk about general db basics (i.e., not helping me with VS/C#) or about C# communication with an existing SQL db. Thank you.

    (In case it matters, I'm using the open-source System.Data.SQLite (sqlite.phxsoftware.com) for the db. I chose it over SQL Server CE after seeing a comparison between the two. Also, I wanted a server-less version of SQL, because this little app will be on other people's computers and I want to do as little support as possible.)
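
    As a concrete starting point, here is a minimal sketch using the System.Data.SQLite provider mentioned in the question; the file name, table names and columns are made up for illustration, and relationships are declared with FOREIGN KEY clauses in the CREATE TABLE statements:

        // Sketch: create a SQLite database file with two related tables using
        // System.Data.SQLite. Table and column names are illustrative only.
        using System.Data.SQLite;

        class CreateDb
        {
            static void Main()
            {
                using (var conn = new SQLiteConnection("Data Source=app.db"))
                {
                    conn.Open();
                    string ddl =
                        @"PRAGMA foreign_keys = ON;  -- enforcement is off by default in SQLite
                          CREATE TABLE IF NOT EXISTS Customer (
                              Id   INTEGER PRIMARY KEY,
                              Name TEXT NOT NULL
                          );
                          CREATE TABLE IF NOT EXISTS [Order] (
                              Id         INTEGER PRIMARY KEY,
                              CustomerId INTEGER NOT NULL REFERENCES Customer(Id),
                              Total      REAL
                          );";
                    using (var cmd = new SQLiteCommand(ddl, conn))
                    {
                        cmd.ExecuteNonQuery(); // runs the whole batch of statements
                    }
                }
            }
        }

    Note that SQLite only enforces FOREIGN KEY constraints when the foreign_keys pragma is switched on (SQLite 3.6.19 or later), which is why the sketch issues it first.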


  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP.NET, Windows Forms, WPF and Silverlight, plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities.

    Unified XAML Product Strategy - Share Code, Get More Controls

    In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data-bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications.

    Track Users' Behaviors

    New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze users' behaviors to ensure the experience you want to deliver.

    NetAdvantage for Windows Forms - New Office® 2010 Ribbon and Application Menu 2010

    Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as the Microsoft® Office 2010 ribbon and an enhanced Infragistics.Excel to export the contents of the high-performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment.

    Create Faster Web 2.0 Experiences with NetAdvantage for ASP.NET

    Infragistics continues to push the envelope to deliver the fastest ASP.NET WebForms controls available on the market. Our lightning-fast ASP.NET grids are now enhanced with XPS/PDF exporting and summary rows. This release also includes support for jQuery templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls, allowing you to quickly cut down overall page size.

    Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience

    NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end-user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data.

    Mapping Made Easy

    10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spatial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like choropleth shading, and scale panes allowing users to zoom in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs.

    http://www.infragistics.com


  • Best Practices for fault tolerance and reliability of scheduled tasks or services

    - by user177883
    I have been working on many applications that run as Windows services or scheduled tasks. Now I want to make sure that these applications will be fault-tolerant and reliable. For example, I have a service that runs every hour; if the service crashes while it is running, I'd like the application to run again for the same period, to avoid data loss. Moreover, I'd like the program to report the error with details. My goal is to avoid data loss and to never fall behind in running the program.

    I have built a class library that a user can import into a project. The library is supposed to keep information about the running instances of the program, i.e. the program reads and writes information about the running interval, running status, etc. This data is stored in a database.

    I am curious whether there are some best practices to make scheduled tasks / Windows services fault-tolerant and reliable.
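
    To make the bookkeeping idea concrete, here is a minimal sketch of the interval tracking described above (all names are hypothetical, and an in-memory list stands in for the database a real library would use):

        // Sketch: track which hourly intervals have completed, so a crashed or
        // missed run can be detected and re-executed. Names and storage are
        // illustrative; a real implementation would persist RunRecord rows.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class RunRecord
        {
            public DateTime IntervalStart; // the hour this run covers
            public bool Completed;         // set only after the work finished
        }

        class RunTracker
        {
            private readonly List<RunRecord> _records = new List<RunRecord>();

            public RunRecord Begin(DateTime intervalStart)
            {
                var rec = new RunRecord { IntervalStart = intervalStart, Completed = false };
                _records.Add(rec);         // written *before* the work starts
                return rec;
            }

            public void Complete(RunRecord rec) { rec.Completed = true; }

            // Any interval since 'from' with no completed record needs a re-run.
            public IEnumerable<DateTime> MissedIntervals(DateTime from, DateTime now)
            {
                for (var t = from; t < now; t = t.AddHours(1))
                    if (!_records.Any(r => r.IntervalStart == t && r.Completed))
                        yield return t;
            }
        }

    On startup, the service would ask for the missed intervals since its last completed run and process those first; a crash mid-run is caught automatically, because a crashed run never set its Completed flag.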


  • iPhone and Core Data: how to retain user-entered data between updates?

    - by Shaggy Frog
    Consider an iPhone application that is a catalogue of animals. The application should allow the user to add custom information for each animal -- let's say a rating (on a scale of 1 to 5), as well as some notes they can enter in about the animal. However, the user won't be able to modify the animal data itself.

    Assume that when the application gets updated, it should be easy for the (static) catalogue part to change, but we'd like the (dynamic) custom user information part to be retained between updates, so the user doesn't lose any of their custom information.

    We'd probably want to use Core Data to build this app. Let's also say that we have a previous process already in place to read in animal data to pre-populate the backing (SQLite) store that Core Data uses. We can embed this database file into the application bundle itself, since it doesn't get modified. When a user downloads an update to the application, the new version will include the latest (static) animal catalogue database, so we don't ever have to worry about it being out of date.

    But, now the tricky part: how do we store the (dynamic) user custom data in a sound manner? My first thought is that the (dynamic) database should be stored in the Documents directory for the app, so application updates don't clobber the existing data. Am I correct?

    My second thought is that since the (dynamic) user custom data database is not in the same store as the (static) animal catalogue, we can't naively make a relationship between the Rating and the Notes entities (in one database) and the Animal entity (in the other database). In this case, I would imagine one solution would be to have an "animalName" string property in the Rating/Notes entity, and match it up at runtime. Is this the best way to do it, or is there a way to "sync" two different databases in Core Data?


  • Can I perform some processing on the POST data before ASP.NET MVC UpdateModel happens?

    - by Domenic
    I would like to strip out non-numeric elements from the POST data before using UpdateModel to update the copy in the database. Is there a way to do this?

        // TODO: it appears I don't even use the parameter given at all, and all the magic
        // happens via UpdateModel and the "controller's current value provider"?
        [HttpPost]
        public ActionResult Index([Bind(Include="X1, X2")] Team model) // TODO: stupid magic strings
        {
            if (this.ModelState.IsValid)
            {
                TeamContainer context = new TeamContainer();
                Team thisTeam = context.Teams.Single(t => t.TeamId == this.CurrentTeamId);

                // TODO HERE: apply StripWhitespace() to the data before using UpdateModel.
                // The data is currently somewhere in the "current value provider"?
                this.UpdateModel(thisTeam);
                context.SaveChanges();
                this.RedirectToAction(c => c.Index());
            }
            else
            {
                this.ModelState.AddModelError("", "Please enter two valid Xs.");
            }

            // If we got this far, something failed; redisplay the form.
            return this.View(model);
        }

    Sorry for the terseness - up all night working on this; hopefully my question is clear enough? Also sorry, since this is kind of a newbie question that I might be able to answer with a few hours of documentation-trawling, but I'm time-pressured... bleh.
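
    One way to get at the posted values before binding is sketched below; it is written from memory of the MVC 2-era API, so treat the exact UpdateModel overload and the FormCollection.ToValueProvider() call as assumptions to verify. The idea is to accept a FormCollection, scrub it, and bind from the scrubbed collection instead of the controller's default value provider (TeamContainer, Team and CurrentTeamId are the types from the question):

        // Sketch only: assumes the MVC 2 UpdateModel overload that accepts an
        // IValueProvider. Requires using System.Linq, System.Text.RegularExpressions
        // and System.Web.Mvc.
        [HttpPost]
        public ActionResult Index(FormCollection form)
        {
            foreach (string key in new[] { "X1", "X2" }) // the fields from the Bind list
            {
                if (form[key] != null)
                    form[key] = Regex.Replace(form[key], @"[^0-9.]", ""); // keep digits and '.'
            }

            TeamContainer context = new TeamContainer();
            Team thisTeam = context.Teams.Single(t => t.TeamId == this.CurrentTeamId);

            // Bind from the scrubbed form values rather than the default value provider.
            UpdateModel(thisTeam, new[] { "X1", "X2" }, form.ToValueProvider());
            context.SaveChanges();
            return RedirectToAction("Index");
        }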


  • How to store and remove dynamically and automatically allocated variables of a generic data type in a custom list data structure

    - by Vineel Kumar Reddy
    Hi, I have created a list data structure implementation for a generic data type, with each node declared as follows:

        struct Node {
            void *data;
            ....
            ....
        }

    So each node in my list has a pointer to the actual data (generic - it could be anything) that should be stored in the list. I have the following signature for adding a node to the list:

        AddNode(struct List *list, void *eledata);

    The problem is that when I want to remove a node, I want to free even the data block pointed to by the *data pointer inside the node structure that is going to be freed. At first, freeing the data block seems straightforward:

        free(data); // forget about the syntax...

    If data points to a block created by malloc, the above call is fine, and we can free that block using free:

        int *x = (int *) malloc(sizeof(int));
        *x = 10;
        AddNode(list, (void *) x); // x can be freed, as it was created using malloc

    But what if a node is created as follows?

        int x = 10;
        AddNode(list, (void *) &x); // x cannot be freed, as it was not created using malloc

    Here we cannot call free on the variable x! How do I know, or how do I implement, the right freeing behaviour for both dynamically allocated and automatic variables that are passed to my list? Thanks in advance.


  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#/.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period and view plots of the data. Typically, a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs needs to be acquired - data for at least a few days. The hardware interface is a UART-to-USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in data acquired over a number of days or weeks.

    What I would like to do is improve the perceived performance for the user. I realize that, with the hardware speed limitation, the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if it is ever requested again. The approach I have been considering is to use a light database, like SQL Server CE, that can store the data logs as they are received. I would then first search the cache before querying the device for logs, and update the cache with any logs obtained by the request that were not already cached.

    Finally, my question: would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
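
    To make the cache-first read path concrete, here is a minimal sketch (all type and method names are hypothetical; a dictionary stands in for the SQL Server CE table, and the delegate stands in for the ~30 logs/second device link):

        // Sketch: serve logs from the local cache and hit the slow device link
        // only for the minutes that are missing, then cache whatever came back.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class LogEntry
        {
            public DateTime Timestamp; // one log per minute
            public double Pressure, Temperature, Salinity;
        }

        class CachedLogSource
        {
            private readonly Dictionary<DateTime, LogEntry> _cache = new Dictionary<DateTime, LogEntry>();
            private readonly Func<IEnumerable<DateTime>, IEnumerable<LogEntry>> _readFromDevice;

            public CachedLogSource(Func<IEnumerable<DateTime>, IEnumerable<LogEntry>> readFromDevice)
            {
                _readFromDevice = readFromDevice; // the slow UART-to-USB path
            }

            public IList<LogEntry> GetRange(DateTime from, DateTime to)
            {
                var missing = new List<DateTime>();
                for (var t = from; t <= to; t = t.AddMinutes(1))
                    if (!_cache.ContainsKey(t))
                        missing.Add(t);              // only these go to the device

                foreach (var entry in _readFromDevice(missing))
                    _cache[entry.Timestamp] = entry; // update cache with new logs

                return _cache.Values
                             .Where(e => e.Timestamp >= from && e.Timestamp <= to)
                             .OrderBy(e => e.Timestamp)
                             .ToList();
            }
        }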

