Search Results


  • Bulk inserting: best way to go about it? + Help me fully understand what I found so far

    - by chobo2
    Hi. I saw this post and read it, and it seems like bulk copy might be the way to go: http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c

    I still have some questions and want to know how things actually work, so I found two tutorials:

    http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
    http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx

    The first way uses two ADO.NET 2.0 features: BatchInsert and BulkCopy. The second one uses LINQ to SQL and OPENXML. The second sort of appeals to me, as I am using LINQ to SQL already and prefer it over ADO.NET. However, as one person pointed out in the posts, it just goes around the issue at the cost of performance (nothing wrong with that in my opinion).

    First I will talk about the two ways in the first tutorial. I am using VS2010 Express, .NET 4.0, MVC 2.0 and SQL Server 2005. Is ADO.NET 2.0 the most current version? Based on the technology I am using, are there updates to what I am going to show that would improve it somehow? Is there anything these tutorials left out that I should know about?

    BatchInsert

    I am using this table for all the examples:

        CREATE TABLE [dbo].[TBL_TEST_TEST]
        (
            ID INT IDENTITY(1,1) PRIMARY KEY,
            [NAME] [varchar](50)
        )

    SP code:

        USE [Test]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50))
        AS
        BEGIN
            INSERT INTO TBL_TEST_TEST VALUES (@Name);
        END

    C# code:

        /// <summary>
        /// Another ADO.NET 2.0 way that uses a stored procedure to do a bulk insert.
        /// Seems slower than the "BatchBulkCopy" way, and it crashes when you try to insert 500,000 records in one go.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchInsert()
        {
            // Get the DataTable with rows in state RowState.Added
            DataTable dtInsertRows = GetDataTable();

            SqlConnection connection = new SqlConnection(connectionString);
            SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
            command.CommandType = CommandType.StoredProcedure;
            command.UpdatedRowSource = UpdateRowSource.None;

            // Set the parameter with the appropriate source column name
            command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName);

            SqlDataAdapter adpt = new SqlDataAdapter();
            adpt.InsertCommand = command;
            // Specify the number of records to be inserted/updated in one go. Default is 1.
            adpt.UpdateBatchSize = 1000;

            connection.Open();
            int recordsInserted = adpt.Update(dtInsertRows);
            connection.Close();
        }

    So the first thing is the batch size. Why would you set the batch size to anything but the number of records you are sending? I am sending 500,000 records, so I set a batch size of 500,000. Next, why does it crash when I do this? If I set the batch size to 1000, it works just fine. Here is the error:

        System.Data.SqlClient.SqlException was unhandled
        Message="A transport-level error has occurred when sending the request to the server.
        (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)"
        Source=".Net SqlClient Data Provider"
        ErrorCode=-2146232060
        Class=20
        LineNumber=0
        Number=233
        Server=""
        State=0
        StackTrace:
            at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
            at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
            at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping)
            at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping)
            at System.Data.Common.DbDataAdapter.Update(DataTable dataTable)
            at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124
            at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16
        InnerException:

    Inserting 500,000 records with an insert batch size of 1000 took "2 mins and 54 seconds". Of course this is no official time; I sat there with a stopwatch (I am sure there are better ways, but I was too lazy to look up what they were). I find that kind of slow compared to all my other runs (except the LINQ to SQL insert one), and I am not really sure why.

    Next I looked at BulkCopy:

        /// <summary>
        /// An ADO.NET 2.0 way to mass insert records. This seems to be the fastest.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchBulkCopy()
        {
            // Get the DataTable
            DataTable dtInsertRows = GetDataTable();

            using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
            {
                sbc.DestinationTableName = "TBL_TEST_TEST";

                // Number of records to be processed in one go
                sbc.BatchSize = 500000;

                // Map the source column from the DataTable to the destination column in the SQL Server 2005 table
                // sbc.ColumnMappings.Add("ID", "ID");
                sbc.ColumnMappings.Add("NAME", "NAME");

                // Number of records after which the client has to be notified about its status
                sbc.NotifyAfter = dtInsertRows.Rows.Count;

                // Event that gets fired when NotifyAfter number of records are processed
                sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);

                // Finally write to server
                sbc.WriteToServer(dtInsertRows);
                sbc.Close();
            }
        }

    This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BatchBulkCopy had no problem with a 500,000 batch size, so again, why make it smaller than the number of records you want to send? With BatchBulkCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it took 8 seconds. So much faster than the BatchInsert one above.

    Now I tried the other tutorial:

        USE [Test]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText)
        AS
        DECLARE @hDoc int

        exec sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

        INSERT INTO TBL_TEST_TEST(NAME)
        SELECT XMLProdTable.NAME
        FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
        WITH (
            ID Int,
            NAME varchar(100)
        ) XMLProdTable

        EXEC sp_xml_removedocument @hDoc

    C# code:

        /// <summary>
        /// This uses LINQ to SQL to make the table objects.
        /// They are then serialized to an XML document and sent to a stored procedure
        /// that then does a bulk insert (I think with OPENXML).
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertXMLBatch()
        {
            using (TestDataContext db = new TestDataContext())
            {
                TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000];
                for (int count = 0; count < 500000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    testRecords[count] = testRecord;
                }

                StringBuilder sBuilder = new StringBuilder();
                System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
                XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[]));
                serializer.Serialize(sWriter, testRecords);
                db.insertTestData(sBuilder.ToString());
            }
        }

    I like this because I get to use objects, even though it is kind of redundant. But I don't get how the SP works. I don't know if OPENXML has some batch insert under the hood, and I don't even know how to take this example SP and change it to fit my tables, since, like I said, I don't know what is going on.

    I also don't know what would happen if the object had more tables in it. Say I have a ProductName table that has a relationship to a Product table, or something like that. In LINQ to SQL you could get the ProductName object and make changes to the Product table through that same object, so I am not sure how to take that into account; I am not sure if I would have to do separate inserts or what. The time was pretty good: 52 seconds for 500,000 records.

    The last way, of course, was just using LINQ to do it all, and it was pretty bad:

        /// <summary>
        /// This uses LINQ to SQL to insert lots of records.
        /// This way is slow as it uses no mass insert.
        /// I only tried to insert 50,000 records as I did not want to sit around until it did 500,000.
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertAll()
        {
            using (TestDataContext db = new TestDataContext())
            {
                db.CommandTimeout = 600;
                for (int count = 0; count < 50000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
                }
                db.SubmitChanges();
            }
        }

    I did only 50,000 records and that took over a minute.

    So I have really narrowed it down to the LINQ to SQL bulk insert way or bulk copy. I am just not sure how to do it with either when you have relationships, and I am not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying that yet.

    I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes LINQ to SQL nicer, as you've got objects, especially if you're first parsing data from an XML file before inserting it into the database.

    Full C# code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Xml.Serialization;
        using System.Data;
        using System.Data.SqlClient;

        namespace TestIQueryable
        {
            class Program
            {
                private static string connectionString = "";

                static void Main(string[] args)
                {
                    BatchInsert();
                    Console.WriteLine("done");
                }

                // LinqInsertAll(), LinqInsertXMLBatch(), BatchBulkCopy() and BatchInsert()
                // are exactly as listed above, except that this run used
                // adpt.UpdateBatchSize = 500000 in BatchInsert().

                private static DataTable GetDataTable()
                {
                    // You first need a DataTable that has all the insert values in it
                    DataTable dtInsertRows = new DataTable();
                    dtInsertRows.Columns.Add("NAME");

                    for (int i = 0; i < 500000; i++)
                    {
                        DataRow drInsertRow = dtInsertRows.NewRow();
                        string name = "Name : " + i;
                        drInsertRow["NAME"] = name;
                        dtInsertRows.Rows.Add(drInsertRow);
                    }
                    return dtInsertRows;
                }

                static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
                {
                    Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString());
                }
            }
        }


  • Calling system commands from Perl

    - by Dan J
    In an older version of our code, we called out from Perl to do an LDAP search as follows:

        # Pass the base DN in via the ldapsearch-specific environment variable
        # (rather than as the "-b" parameter) to avoid problems of shell
        # interpretation of special characters in the DN.
        $ENV{LDAP_BASEDN} = $ldn;
        $lcmd = "ldapsearch -x -T -1 -h $gLdapServer" .
                <snip>
                " > $lworkfile 2>&1";
        system($lcmd);
        if (($? != 0) || (! -e "$lworkfile"))
        {
            # Handle the error
        }

    The code above would result in a successful LDAP search, and the output of that search would be in the file $lworkfile.

    Unfortunately, we recently reconfigured openldap on this server so that a "BASE DC=" is specified in /etc/openldap/ldap.conf and /etc/ldap.conf. That change seems to mean ldapsearch ignores the LDAP_BASEDN environment variable, and so my ldapsearch fails.

    I've tried a couple of different fixes, but without success so far:

    (1) I tried going back to using the "-b" argument to ldapsearch, but escaping the shell metacharacters. I started writing the escaping code:

        my $ldn_escaped = $ldn;
        $ldn_escaped =~ s/\/\\/g;
        $ldn_escaped =~ s/`/\`/g;
        $ldn_escaped =~ s/$/\$/g;
        $ldn_escaped =~ s/"/\"/g;

    That threw up some Perl errors because I haven't escaped those regexes properly (the line number matches the regex with the backticks in it):

        Backticks found where operator expected at /tmp/mycommand line 404, at end of line

    At the same time I started to doubt this approach and looked for a better one.

    (2) I then saw some Stack Overflow questions (here and here) that suggested a better solution. Here's the code:

        print("Processing...");
        # Pass the arguments to ldapsearch by invoking open() with an array.
        # This ensures the shell does NOT interpret shell metacharacters.
        my(@cmd_args) = ("-x", "-T", "-1", "-h", "$gLdapPool", "-b", "$ldn", <snip> );
        $lcmd = "ldapsearch";
        open my $lldap_output, "-|", $lcmd, @cmd_args;
        while (my $lline = <$lldap_output>)
        {
            # I can parse the contents of my file fine
        }
        $lldap_output->close;

    The two problems I am having with approach (2) are:

    a) Calling open or system with an array of arguments does not let me pass > $lworkfile 2>&1 to the command, so I can't stop the ldapsearch output being sent to the screen, which makes my output look ugly:

        Processing...ldap_bind: Success (0) additional info: Success

    b) I can't figure out how to choose the location (i.e. path and file name) of the file handle passed to open, i.e. I don't know where $lldap_output is. Can I move/rename it, or inspect it to find out where it is (or is it not actually saved to disk)?

    Based on the problems with (2), this makes me think I should go back to approach (1), but I'm not quite sure how to.
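    For reference, a minimal sketch (not the original code; $server, $base and $workfile are hypothetical stand-ins for $gLdapPool, $ldn and $lworkfile) of one way around problem (a): fork via Perl's "-|" open, dup STDERR onto STDOUT in the child, and exec the argument list directly so the shell never sees the DN, while the parent streams everything into a file of its own choosing:

        #!/usr/bin/perl
        # Hedged sketch: run ldapsearch without a shell, but still capture
        # stdout+stderr into a file we choose.
        use strict;
        use warnings;

        my ($server, $base, $workfile) = @ARGV;
        my @cmd = ("ldapsearch", "-x", "-T", "-1", "-h", $server, "-b", $base);

        my $pid = open(my $ldap_out, "-|");       # fork; child's STDOUT is piped to us
        die "fork failed: $!" unless defined $pid;

        if ($pid == 0) {                          # child process
            open(STDERR, ">&", \*STDOUT) or die "can't dup STDERR: $!";
            exec(@cmd) or die "exec failed: $!";  # list form: no shell metacharacter expansion
        }

        open(my $work, ">", $workfile) or die "can't open $workfile: $!";
        while (my $line = <$ldap_out>) {
            print {$work} $line;                  # stream everything into our own file
        }
        close($ldap_out);                         # populates $? like system() does
        close($work);
        die "ldapsearch failed with status $?" if $? != 0;

    The design point is that the redirection happens with file handles inside the process, so the safety of the list form is kept without giving up control of where the output goes.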


  • Perl: Deleting multiple recurring lines where a certain criterion is met

    - by george-lule
    Dear all, I have data that looks like the sample below; the actual file is thousands of lines long.

        Event_time                 Cease_time                 Object_of_reference
        -------------------------- -------------------------- ----------------------------------------------------------------------------------
        Apr 5 2010 5:54PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=LUGALAMBO_900
        Apr 5 2010 5:55PM          Apr 5 2010 6:43PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=LUGALAMBO_900
        Apr 5 2010 5:58PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 5:58PM          Apr 5 2010 6:01PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 6:01PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 6:03PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900
        Apr 5 2010 6:03PM          Apr 5 2010 6:04PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900
        Apr 5 2010 6:04PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900
        Apr 5 2010 6:03PM          Apr 5 2010 6:03PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 6:03PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
        Apr 5 2010 6:03PM          Apr 5 2010 7:01PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA

    As you can see, the file has a header which describes what the various fields stand for (event start time, event cease time, affected element), followed by a number of dashes.

    My issue is that the data contains a number of entries where the cease time is NULL, i.e. the event is still active. All such entries must go: for each element where the alarm cease time is NULL, the start time, the cease time (in this case NULL) and the actual element must be deleted from the file. In the remaining data, all the text starting from the word SubNetwork up to BtsSiteMgr= must also go, along with the headers and the dashes. The final output should look like this:

        Apr 5 2010 5:55PM   Apr 5 2010 6:43PM   LUGALAMBO_900
        Apr 5 2010 5:58PM   Apr 5 2010 6:01PM   BULAGA
        Apr 5 2010 6:03PM   Apr 5 2010 6:04PM   KAPKWAI_900
        Apr 5 2010 6:03PM   Apr 5 2010 6:03PM   BULAGA
        Apr 5 2010 6:03PM   Apr 5 2010 7:01PM   BULAGA

    Below is a Perl script I have written. It takes care of the headers, the dashes and the NULL entries, but I have failed to delete the lines following the NULL entries so as to produce the above output.

        #!/usr/bin/perl
        use strict;
        use warnings;

        $^I = ".bak"; # Back up the file before messing it up.

        open (DATAIN, "<george_perl.txt") || die("can't open datafile: $!");   # Read in the data
        open (DATAOUT, ">gen_results.txt") || die("can't open datafile: $!");  # Prepare for the writing

        while (<DATAIN>) {
            s/Event_time//g;
            s/Cease_time//g;
            s/Object_of_reference//g;
            s/\-//g; # Preceding 4 statements are for cleaning out the headers

            my $theline = $_;
            if ($theline =~ /NULL/) {
                next;
                next if $theline =~ /SubN/;
            }
            else {
                print DATAOUT $theline;
            }
        }
        close DATAIN;
        close DATAOUT;

    Kindly help point out any modifications I need to make to the script so that it produces the necessary output. I will be very glad for your help. Kind regards, George.


  • How to configure a custom binding to consume this WS-Security-secured web service using WCF?

    - by Soeteman
    Hello all, I'm trying to configure a WCF client to be able to consume a web service that returns the following response message:

        <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://myservice.wsdl">
          <env:Header>
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1" />
          </env:Header>
          <env:Body>
            <ns0:StatusResponse>
              <result> ... </result>
            </ns0:StatusResponse>
          </env:Body>
        </env:Envelope>

    To do this, I've constructed a custom binding (which doesn't work); I keep getting a "Security header is empty" message.

    My binding:

        <customBinding>
          <binding name="myCustomBindingForVestaServices">
            <security authenticationMode="UserNameOverTransport"
                      messageSecurityVersion="WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11"
                      securityHeaderLayout="Strict"
                      includeTimestamp="false"
                      requireDerivedKeys="true">
            </security>
            <textMessageEncoding messageVersion="Soap11" />
            <httpsTransport authenticationScheme="Negotiate" requireClientCertificate="false" realm="" />
          </binding>
        </customBinding>

    My request seems to be using the same SOAP and WS-Security versions as the response, but it uses different namespace prefixes ("o" instead of "wsse"). Could this be the reason why I keep getting the "Security header is empty" message?

    Request message:

        <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
          <s:Header>
            <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <o:UsernameToken u:Id="uuid-d3b70d1f-0ebb-4a79-85e6-34f0d6aa3d0f-1">
                <o:Username>user</o:Username>
                <o:Password>pass</o:Password>
              </o:UsernameToken>
            </o:Security>
          </s:Header>
          <s:Body>
            <getPrdStatus xmlns="http://myservice.wsdl">
              <request xmlns="" xmlns:a="http://myservice.wsdl" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
                ...
              </request>
            </getPrdStatus>
          </s:Body>
        </s:Envelope>

    How do I need to configure my WCF client binding to be able to consume this web service? Any help greatly appreciated! Sander


  • Why is this XMLHttpRequest sample from Mozilla not working in Firefox 3?

    - by j0rd4n
    I'm trying to get the sample code from Mozilla that consumes a REST web service to work under Firefox 3.0.10. The following code does NOT work in Firefox but does in IE 8! Why is this not working? Does IE 8 have support for XMLHttpRequest? Most examples I've seen use the ActiveX allocation. What should I be doing? XMLHttpRequest seems more standardized.

    Sample:

        var req = new XMLHttpRequest();
        req.open('GET', 'http://localhost/myRESTfulService/resource', false); // throws 'undefined' exception
        req.send(null);
        if (req.status == 0)
            dump(req.responseText);

    The open statement is throwing an exception with the description 'undefined'. This is strange, as I allocate the req object, am running it in Firefox, and checked to make sure it is defined before calling open (which it says is of type 'object'). I've also tried the asynchronous version of this with no luck.

    EDIT 2: Below is my most recent code:

        function createRequestObject() {
            if (window.XMLHttpRequest) {
                return new XMLHttpRequest();
            } else if (window.ActiveXObject) {
                return new ActiveXObject("Microsoft.XMLHTTP");
            }
            return null;
        }

        function handleResponse(req) {
            document.writeln("Handling response..."); // NEVER GETS CALLED
            if (req.readyState == 0) {
                document.writeln("UNINITIALIZED");
            } else if (req.readyState == 1) {
                document.writeln("LOADING");
            } else if (req.readyState == 2) {
                document.writeln("LOADED");
            } else if (req.readyState == 3) {
                document.writeln("INTERACTIVE");
            } else if (req.readyState == 4) {
                document.writeln("COMPLETE");
                if (req.status == 200) {
                    document.writeln("SUCCESS");
                }
            }
        }

        document.writeln("");
        var req = createRequestObject();
        try {
            document.writeln("Opening service...");
            req.onreadystatechange = function() { handleResponse(req); };
            req.open('POST', 'http://localhost/test/test2.txt', true); // WORKS IN IE8 & NOT FIREFOX
            document.writeln("Sending service request...");
            req.send('');
            document.writeln("Done");
        } catch (err) {
            document.writeln("ERROR: " + err.description);
        }

    EDIT 3: Alright, I reworked this in jQuery. jQuery works great in IE, but it throws 'undefined' when running from Firefox. I double-checked, and 'Enable JavaScript' is turned on in Firefox; it seems to work fine on all other web pages. Below is the jQuery code:

        function handleResponse(resp) {
            alert("Name: " + resp.Name);
            alert("URL: " + resp.URL);
        }

        $(document).ready(function() {
            $("a").click(function(event) {
                try {
                    $.get("http://localhost/services/ezekielservices/configservice/ezekielservices.svc/test", "{}",
                        function(data) { handleResponse(data); }, "json");
                } catch (err) {
                    alert("'$.get' threw an exception: " + err.description);
                }
                event.preventDefault();
            });
        }); // End 'ready' check

    Summary of solution: Alright, web lesson 101. My problem was indeed cross-domain. I was viewing my site unpublished (just on the file system), which was hitting a published service. When I published my site under the same domain, it worked. This also brings up an important distinction between IE and Firefox: when IE encounters this scenario, it prompts the user whether or not to accept the cross-domain call, while Firefox throws an exception. While I'm fine with an exception, a more descriptive one would have been helpful. Thanks to all those who helped me.


  • Session state provider and global.asax not interacting properly?

    - by yodaj007
    I'm experimenting with creating a crude, proof-of-concept session state store provider in ASP.NET, but I've got a problem and I'm not sure what to do about it. The website works properly when using the InProc provider: Session_Start in global.asax is called on session creation, as it should be. But not if I implement my own provider; the Session_Start method from global.asax isn't called at all if a new session is being created (that is, if I delete the session state file). Am I missing something important here?

        public class TestSessionProvider : SessionStateStoreProviderBase
        {
            private const string ROOT = "c:\\projects\\sessions\\";
            private const int TIMEOUT_MINUTES = 30;

            public string ApplicationName
            {
                get { return HostingEnvironment.ApplicationVirtualPath; }
            }

            private string GetFilename(string id)
            {
                string filename = String.Format("{0}_{1}.session", ApplicationName, id);
                char[] invalids = Path.GetInvalidPathChars();
                for (int i = 0; i < invalids.Length; i++)
                {
                    filename = filename.Replace(invalids[i], '_');
                }
                return Path.Combine(ROOT, Path.GetFileName(filename));
            }

            public override void Initialize(string name, NameValueCollection config)
            {
                if (config == null)
                    throw new ArgumentNullException("config");
                if (String.IsNullOrEmpty(name))
                    name = "Sporkalicious";
                base.Initialize(name, config);
            }

            public override SessionStateStoreData CreateNewStoreData(HttpContext context, int timeout)
            {
                SessionStateItemCollection items = new SessionStateItemCollection();
                HttpStaticObjectsCollection objects = SessionStateUtility.GetSessionStaticObjects(context);
                return new SessionStateStoreData(items, objects, TIMEOUT_MINUTES);
            }

            /// <summary>
            /// The CreateUninitializedItem method is used with cookieless sessions when the
            /// regenerateExpiredSessionId attribute is set to true, which causes SessionStateModule
            /// to generate a new SessionID value when an expired session ID is encountered.
            /// </summary>
            /// <param name="context"></param>
            /// <param name="id"></param>
            /// <param name="timeout"></param>
            public override void CreateUninitializedItem(HttpContext context, string id, int timeout)
            {
                FileStream fs = File.Open(GetFilename(id), FileMode.CreateNew, FileAccess.Write, FileShare.None);
                BinaryWriter writer = new BinaryWriter(fs);
                SessionStateItemCollection coll = new SessionStateItemCollection();
                coll.Serialize(writer);
                fs.Flush();
                fs.Close();
            }

            public override SessionStateStoreData GetItem(HttpContext context, string id, out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
            {
                return GetItemExclusive(context, id, out locked, out lockAge, out lockId, out actions);
            }

            public override SessionStateStoreData GetItemExclusive(HttpContext context, string id, out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions)
            {
                locked = false;
                lockAge = TimeSpan.FromDays(1);
                lockId = 0;
                actions = SessionStateActions.None;

                if (!File.Exists(GetFilename(id)))
                {
                    return null;
                }

                FileStream fs = File.Open(GetFilename(id), FileMode.Open, FileAccess.ReadWrite, FileShare.Read);
                BinaryReader reader = new BinaryReader(fs);
                SessionStateItemCollection coll = SessionStateItemCollection.Deserialize(reader);
                fs.Close();
                return new SessionStateStoreData(coll, new HttpStaticObjectsCollection(), TIMEOUT_MINUTES);
            }

            public override void ReleaseItemExclusive(HttpContext context, string id, object lockId) { }

            public override void RemoveItem(HttpContext context, string id, object lockId, SessionStateStoreData item)
            {
                File.Delete(GetFilename(id));
            }

            public override void ResetItemTimeout(HttpContext context, string id) { }

            public override void SetAndReleaseItemExclusive(HttpContext context, string id, SessionStateStoreData item, object lockId, bool newItem)
            {
                if (!File.Exists(GetFilename(id)))
                {
                    CreateUninitializedItem(context, id, 10);
                }
                FileStream fs = File.Open(GetFilename(id), FileMode.Truncate, FileAccess.Write, FileShare.None);
                BinaryWriter writer = new BinaryWriter(fs);
                SessionStateItemCollection coll = (SessionStateItemCollection)item.Items;
                coll.Serialize(writer);
                fs.Flush();
                fs.Close();
            }

            public override bool SetItemExpireCallback(SessionStateItemExpireCallback expireCallback)
            {
                return false;
            }

            public override void InitializeRequest(HttpContext context) { }
            public override void EndRequest(HttpContext context) { }
            public override void Dispose() { }
        }


  • MediaPlayer failure exception

    - by Rahulkapil
    I am working on an Android application in which I have to play random sounds from my assets folder. There are some images as well: when I click on any image, a sound related to that image must play from the assets folder. I managed all of that, but sometimes my MediaPlayer fails unexpectedly. I am attaching my code:

        private Handler threadHandler = new Handler() {
            public void handleMessage(android.os.Message msg) {
                /* first */
                try {
                    InputStream ims1 = getAssets().open("images/" + dataAll_pic_name1);
                    d1 = Drawable.createFromStream(ims1, null);
                    rl1.setVisibility(View.VISIBLE);
                    img1.setImageDrawable(d1);
                    AssetFileDescriptor afd = getAssets().openFd("sounds/" + str_snd1);
                    mp2 = new MediaPlayer();
                    mp2.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
                    mp2.prepare();
                    mp2.start();
                    mp2.setOnCompletionListener(new OnCompletionListener() {
                        @Override
                        public void onCompletion(MediaPlayer mp) {
                            /* second */
                            try {
                                InputStream ims2 = getAssets().open("images/" + dataAll_pic_name2);
                                d2 = Drawable.createFromStream(ims2, null);
                                rl2.setVisibility(View.VISIBLE);
                                img2.setImageDrawable(d2);
                                AssetFileDescriptor afd = getAssets().openFd("sounds/" + str_snd2);
                                mp2 = new MediaPlayer();
                                mp2.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
                                mp2.prepare();
                                mp2.start();
                                mp2.setOnCompletionListener(new OnCompletionListener() {
                                    @Override
                                    public void onCompletion(MediaPlayer mp) {
                                        /* third */
                                        try {
                                            InputStream ims3 = getAssets().open("images/" + dataAll_pic_name3);
                                            d3 = Drawable.createFromStream(ims3, null);
                                            rl3.setVisibility(View.VISIBLE);
                                            img3.setImageDrawable(d3);
                                            AssetFileDescriptor afd = getAssets().openFd("sounds/" + str_snd3);
                                            mp2 = new MediaPlayer();
                                            mp2.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
                                            mp2.prepare();
                                            mp2.start();
                                            mp2.setOnCompletionListener(new OnCompletionListener() {
                                                @Override
                                                public void onCompletion(MediaPlayer mp) {
                                                    /* fourth */
                                                    try {
                                                        InputStream ims4 = getAssets().open("images/" + dataAll_pic_name4);
                                                        d4 = Drawable.createFromStream(ims4, null);
                                                        rl4.setVisibility(View.VISIBLE);
                                                        img4.setImageDrawable(d4);
                                                        AssetFileDescriptor afd = getAssets().openFd("sounds/" + str_snd4);
                                                        mp2 = new MediaPlayer();
                                                        mp2.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
                                                        mp2.prepare();
                                                        mp2.start();
                                                        mp2.setOnCompletionListener(new OnCompletionListener() {
                                                            @Override
                                                            public void onCompletion(MediaPlayer mp) {
                                                                startAnimation();
                                                                //randomSoundPlay();
                                                                timer.schedule(new TimerTask() {
                                                                    public void run() {
                                                                        System.out.println("Wait, what........................:");
                                                                        try {
                                                                            AssetFileDescriptor afd = getAssets().openFd("sounds/" + dataAll_sound_name);
                                                                            mp2 = new MediaPlayer();
                                                                            mp2.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
                                                                            mp2.prepare();
                                                                            mp2.start();
                                                                            mp2.setOnCompletionListener(new OnCompletionListener() {
                                                                                @Override
                                                                                public void onCompletion(MediaPlayer mp) {
                                                                                    vg1.setClickable(true);
                                                                                    vg2.setClickable(true);
                                                                                    vg3.setClickable(true);
                                                                                    vg4.setClickable(true);
                                                                                    btn_spkr.setVisibility(View.VISIBLE);
                                                                                    txtImage();
                                                                                }
                                                                            });
                                                                        } catch (Exception e) {
                                                                            e.printStackTrace();
                                                                        }
                                                                    }
                                                                }, delay_que);
                                                            }
                                                        });
                                                    } catch (Exception e) {
                                                        e.printStackTrace();
                                                    }
                                                }
                                            });
                                        } catch (Exception e) {
                                            e.printStackTrace();
                                        }
                                    }
                                });
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                    });
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };

    In the code above, random images and sounds are set in my activity. When I click on any image, a sound must play, but sometimes it fails. I have tried but have been unable to resolve this issue. Help me out; thanks in advance.
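    For what it's worth, a hedged sketch of factoring the repeated play pattern into one helper inside the same activity (the method name and the onDone parameter are hypothetical). Creating a new MediaPlayer for each sound without releasing the previous one is a common cause of unexpected MediaPlayer failures, so the helper releases the old instance first:

        // Sketch, not a drop-in fix: one helper that plays an asset sound,
        // releasing the previous player so instances do not pile up.
        private MediaPlayer player;

        private void playAssetSound(String fileName, MediaPlayer.OnCompletionListener onDone) {
            try {
                if (player != null) {
                    player.release();   // free the old instance before creating a new one
                    player = null;
                }
                AssetFileDescriptor afd = getAssets().openFd("sounds/" + fileName);
                player = new MediaPlayer();
                player.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
                afd.close();            // the descriptor can be closed after setDataSource()
                player.prepare();
                player.setOnCompletionListener(onDone);
                player.start();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    Each of the nested onCompletion callbacks above could then call playAssetSound(str_snd2, ...) and so on, which also flattens the nesting considerably.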


  • What is the best way to solve an Objective-C namespace collision?

    - by Mecki
    Objective-C has no namespaces; much like C, everything is within one global namespace. Common practice is to prefix classes with initials, e.g. if you are working at IBM, you could prefix them with "IBM"; if you work for Microsoft, you could use "MS"; and so on. Sometimes the initials refer to the project, e.g. Adium prefixes classes with "AI" (as there is no company behind it whose initials you could take). Apple prefixes classes with NS and says this prefix is reserved for Apple only. So far, so good.

    But prepending 2 to 4 letters to a class name is a very, very limited namespace. E.g. MS or AI could have entirely different meanings (AI could be Artificial Intelligence, for example), and some other developer might decide to use them and create an equally named class. Bang, namespace collision.

    Okay, if this is a collision between one of your own classes and one from an external framework you are using, you can easily change the naming of your class, no big deal. But what if you use two external frameworks, neither of which you have the source to or can change? Your application links with both of them and you get name conflicts. How would you go about solving these? What is the best way to work around them in such a way that you can still use both classes?

    In C you can work around these by not linking directly to the library; instead you load the library at runtime using dlopen(), then find the symbol you are looking for using dlsym(), assign it to a global symbol (that you can name any way you like), and then access it through this global symbol. E.g. if you have a conflict because some C library has a function named open(), you could define a variable named myOpen and have it point to the open() function of the library. Thus, when you want to use the system open(), you just use open(), and when you want to use the other one, you access it via the myOpen identifier.

    Is something similar possible in Objective-C, and if not, is there any other clever, tricky solution you can use to resolve namespace conflicts? Any ideas?

    Update: Just to clarify: answers that suggest how to avoid namespace collisions in advance, or how to create a better namespace, are certainly welcome; however, I will not accept them as the answer, since they don't solve my problem. I have two libraries and their class names collide. I can't change them; I don't have the source of either one. The collision is already there, and tips on how it could have been avoided in advance won't help anymore. I can forward them to the developers of these frameworks and hope they choose a better namespace in the future, but for the time being I'm searching for a solution to work with the frameworks right now, within a single application. Any solutions to make this possible?
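    For reference, a minimal C sketch of the dlopen()/dlsym() aliasing trick described above ("libother.so" and the looked-up symbol name are placeholders, not a real library; link with -ldl):

        /* Sketch: bind a library's conflicting symbol to an alias of our choosing. */
        #include <stdio.h>
        #include <dlfcn.h>

        /* Our own global alias for the library's open()-like function. */
        static int (*myOpen)(const char *path, int flags);

        int main(void) {
            void *handle = dlopen("libother.so", RTLD_LAZY | RTLD_LOCAL);
            if (!handle) {
                fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return 1;
            }
            /* Look the symbol up by name and bind it to our alias. */
            *(void **)(&myOpen) = dlsym(handle, "open");
            if (!myOpen) {
                fprintf(stderr, "dlsym failed: %s\n", dlerror());
                dlclose(handle);
                return 1;
            }
            int fd = myOpen("/tmp/example", 0);  /* calls the library's open() */
            printf("library open() returned %d\n", fd);
            dlclose(handle);
            return 0;
        }

    RTLD_LOCAL keeps the loaded library's symbols out of the global symbol table, which is what prevents them from shadowing the system's own versions.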


  • Google Chrome Frame and Facebook JavaScript SDK - Cannot log in

    - by Giannis Savvakis
    In the example below I have an HTML page with the JavaScript code needed to log in to Facebook. In the head I have the Google Chrome Frame meta tag that makes the page run with Google Chrome Frame. If you open this page with any browser, the finish() callback runs normally. If you open it with Google Chrome Frame, it never fires. So this means that every Facebook app that tries to log in to gather user data cannot log in; this happens whenever the page is opened with Google Chrome Frame. But even if I remove the meta tag so that the page can open with IE8, the page opens with Google Chrome Frame again, because Facebook opens Google Chrome Frame by default. So, because this is a Facebook app that runs inside an iframe inside facebook.com, it is forced to open with Google Chrome Frame! SERIOUS BUG!

    I have seen other people reporting it; someone has made a test Facebook app here: http://apps.facebook.com/gcftest/

    appID and channelUrl are dummy values in the example below.

        <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">
        <head>
        <meta name="generator" content="HTML Tidy for Linux/x86 (vers 11 February 2007), see www.w3.org" />
        <meta charset="utf-8" />
        <meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
        <meta http-equiv="Pragma" content="no-cache" />
        <meta http-equiv="Expires" content="0" />
        <meta http-equiv="X-UA-Compatible" content="IE=Edge,chrome=IE8" />
        <title>Facebook Login</title>
        <script type="text/javascript">
        //<![CDATA[
        // Load the SDK asynchronously
        (function(d) {
            var js, id = 'facebook-jssdk', ref = d.getElementsByTagName('script')[0];
            if (d.getElementById(id)) { return; }
            js = d.createElement('script');
            js.id = id;
            js.async = true;
            js.src = "//connect.facebook.net/en_US/all.js";
            ref.parentNode.insertBefore(js, ref);
        }(document));

        var appID = '0000000000000';
        var channelUrl = '//myhost/channel.html';

        // Init the SDK upon load
        window.fbAsyncInit = function() {
            FB.init({
                appId      : appID,      // App ID
                channelUrl : channelUrl,
                status     : true,       // check login status
                cookie     : true,       // enable cookies to allow the server to access the session
                xfbml      : true        // parse XFBML
            });

            FB.Event.subscribe('auth.statusChange', function(response) {
                if (!response.authResponse)
                    FB.login(finish, {scope: 'publish_actions,publish_stream'});
                else
                    finish(response);
            });

            FB.getLoginStatus(finish);
        }

        function finish(response) {
            alert("Hello " + response.name);
        }
        //]]>
        </script>
        </head>
        <body>
        <h1>Facebook login</h1>
        <p>Do NOT close this window.</p>
        <p>please wait...</p>
        </body>
        </html>


  • How to speed up this code?

    - by kaushik
    I have a very large piece of code, 600+ lines, and can't post the whole thing here, but one particular snippet is taking so much time that it is causing problems. I post that part of the code here; please tell me how to speed up the processing, and please suggest which parts may be the reason and measures to improve them, if this small part of the code is understandable.

        using_data = {}

        def join_cost(a, b):
            global using_data
            #print a
            #print b
            save_a = []
            save_b = []
            print 1
            #for i in range(len(m)):
            #    if str(m[i][0]) == str(a):
            save_a = database_index[a]
            #for i in range(len(m)):
            #    if str(m[i][0]) == str(b):
            #print 'save_a', save_a
            #print 'save_b', save_b
            print 2
            save_b = database_index[b]
            using_data[save_a[0]] = save_a
            s = str(save_a[1]).replace('phone', 'text')
            s = str(s) + '.pm'
            p = os.path.join("c:/begpython/wavnk/", s)
            x = open(p, 'r')
            print 3
            for i in range(6):
                x.readline()
            k2 = 'a'
            j = 0
            o = []
            while k2 is not '':
                k2 = x.readline()
                k2 = k2.rstrip('\n')
                oj = k2.split(' ')
                o = o + [oj]
                #print o[j]
                j = j + 1
            #print j
            #print o[2][0]
            temp = long(1232332)
            end_time = save_a[4]
            #print end_time
            k = (j - 1)
            for i in range(k):
                diff = float(o[i][0]) - float(end_time)
                if diff < 0:
                    diff = diff * (-1)
                if temp > diff:
                    temp = diff
                    pm_row = i
            #print pm_row
            #print temp
            #print o[pm_row]
            #pm_row = 3
            q = []
            print 4
            l = str(p).replace('.pm', '.mcep')
            z = open(l, 'r')
            for i in range(pm_row):
                z.readline()
            k3 = z.readline()
            k3 = k3.rstrip('\n')
            q = k3.split(' ')
            #print q
            print 5
            s = str(save_b[1]).replace('phone', 'text')
            s = str(s) + '.pm'
            p = os.path.join("c:/begpython/wavnk/", s)
            x = open(p, 'r')
            for i in range(6):
                x.readline()
            k2 = 'a'
            j = 0
            o = []
            while k2 is not '':
                k2 = x.readline()
                k2 = k2.rstrip('\n')
                oj = k2.split(' ')
                o = o + [oj]
                #print o[j]
                j = j + 1
            #print j
            #print o[2][0]
            temp = long(1232332)
            strt_time = save_b[3]
            #print strt_time
            k = (j - 1)
            for i in range(k):
                diff = float(o[i][0]) - float(strt_time)
                if diff < 0:
                    diff = diff * (-1)
                if temp > diff:
                    temp = diff
                    pm_row = i
            #print pm_row
            #print temp
            #print o[pm_row]
            #pm_row = 3
            w = []
            l = str(p).replace('.pm', '.mcep')
            z = open(l, 'r')
            for i in range(pm_row):
                z.readline()
            k3 = z.readline()
            k3 = k3.rstrip('\n')
            w = k3.split(' ')
            #print w
            cost = 0
            for i in range(12):
                #print q[i]
                #print w[i]
                h = float(q[i]) - float(w[i])
                cost = cost + math.pow(h, 2)
            j_cost = math.sqrt(cost)
            #print cost
            return j_cost

        def target_cost(a, b):
            a = (b + 1) * 3
            b = (a + 1) * 2
            t_cost = (a + b) * 5 / 2
            return t_cost

        r1 = 'shht:ra_77'
        r2 = 'grx_18'
        g = []
        nodes = []
        nodes = nodes + [[r1]]
        for i in range(len(y_in_db_format)):
            g = y_in_db_format[i]
            #print g
            #print g[0]
            g.remove(str(g[0]))
            nodes = nodes + [g]
        nodes = nodes + [[r2]]
        print nodes
        print "lenght of nodes", len(nodes)

        lists = []
        #lists = lists + [r1]
        for i in range(len(nodes)):
            for j in range(len(nodes[i])):
                lists = lists + [nodes[i][j]]
        #lists = lists + [r2]
        print lists

        distance = {}
        for i in range(len(lists)):
            if i == 0:
                distance[str(lists[i])] = 0
            else:
                distance[str(lists[i])] = long(123231223)
        #print distance

        group_dist = []
        infinity = long(123232323)
        for i in range(len(nodes)):
            distances = []
            for j in range(len(nodes[i])):
                #distances = []
                if i == 0:
                    distances = distances + [[nodes[i][j], 0]]
                else:
                    distances = distances + [[nodes[i][j], infinity]]
            group_dist = group_dist + [distances]
            #print distances
        print "group_distances", group_dist
        #print "check", group_dist[0][0][1]

        #costs = {}
        #for i in range(len(lists)):
        #    if i == 0:
        #        costs[str(lists[i])] = 1
        #    else:
        #        costs[str(lists[i])] = get_selfcost(lists[i])

        path = []
        for i in range(len(nodes)):
            mini = []
            if i != (len(nodes) - 1):
                #temp = long(123234324)
                # Now calculate the cost between the current node and each of its neighbours
                for k in range(len(nodes[(i + 1)])):
                    for j in range(len(nodes[i])):
                        current = nodes[i][j]
                        #print "current_node", current
                        j_distance = join_cost(current, nodes[i + 1][k])
                        #t_distance = target_cost(current, nodes[i + 1][k])
                        t_distance = 34
                        #print distance
                        #print "distance between current and neighbours", distance
                        total_distance = (.5 * (float(group_dist[i][j][1]) + float(j_distance)) + .5 * (float(t_distance)))
                        #print "total distance between the intial_nodes and current neighbour", total_distance
                        if int(group_dist[i + 1][k][1]) > int(total_distance):
                            group_dist[i + 1][k][1] = total_distance
                            #print "updated distance", group_dist[i+1][k][1]
                            a = current
                            #print "the neighbour", nodes[i+1][k], "updated the value", a
                            mini = mini + [[str(nodes[i + 1][k]), a]]
            print mini
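    One pattern that often helps code shaped like this, sketched below under the assumption that the .pm/.mcep file layout is as shown above (function and cache names are hypothetical): join_cost() re-opens, re-reads and re-splits the same files on every call inside the triple loop, so parsing each file once and caching the result can cut the runtime dramatically.

        # Hedged sketch: cache the parsed pitch-mark times per file so each file
        # is read once, not once per join_cost call.
        _pm_cache = {}

        def load_pm_times(path):
            if path not in _pm_cache:
                with open(path) as f:
                    lines = f.readlines()[6:]  # skip the 6 header lines
                _pm_cache[path] = [float(ln.split(' ')[0]) for ln in lines if ln.strip()]
            return _pm_cache[path]

        def nearest_pm_row(path, target_time):
            times = load_pm_times(path)
            # index of the pitch mark closest to target_time
            return min(range(len(times)), key=lambda i: abs(times[i] - float(target_time)))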


  • MVC: returning multiple results on stream connection to implement HTML5 SSE

    - by eddo
    I am trying to set up a lightweight HTML5 Server-Sent Events implementation on my MVC 4 web app, without using one of the libraries available for implementing sockets and the like. The lightweight approach I am trying is:

    Client side: EventSource (or jquery.eventsource for IE)

    Server side: long polling with an AsyncController (sorry for dropping the raw test code here, but it is just to give an idea):

        public class HTML5testAsyncController : AsyncController
        {
            private static int curIdx = 0;
            private static BlockingCollection<string> _data = new BlockingCollection<string>();

            static HTML5testAsyncController()
            {
                addItems(10);
            }

            // adds some test messages
            static void addItems(int howMany)
            {
                _data.Add("started");
                for (int i = 0; i < howMany; i++)
                {
                    _data.Add("HTML5 item" + (curIdx++).ToString());
                }
                _data.Add("ended");
            }

            // here comes the async action, 'Simple'
            public void SimpleAsync()
            {
                AsyncManager.OutstandingOperations.Increment();
                Task.Factory.StartNew(() =>
                {
                    var result = string.Empty;
                    var sb = new StringBuilder();
                    string serializedObject = null;
                    // wait up to 40 secs for a message to arrive
                    if (_data.TryTake(out result, TimeSpan.FromMilliseconds(40000)))
                    {
                        JavaScriptSerializer ser = new JavaScriptSerializer();
                        serializedObject = ser.Serialize(new { item = result, message = "MSG content" });
                        sb.AppendFormat("data: {0}\n\n", serializedObject);
                    }
                    AsyncManager.Parameters["serializedObject"] = serializedObject;
                    AsyncManager.OutstandingOperations.Decrement();
                });
            }

            // callback which returns the results on the stream
            public ActionResult SimpleCompleted(string serializedObject)
            {
                ServerSentEventResult sar = new ServerSentEventResult();
                sar.Content = () => { return serializedObject; };
                return sar;
            }

            // pushes the data onto the stream in a format conforming to HTML5 SSE
            public class ServerSentEventResult : ActionResult
            {
                public ServerSentEventResult() { }

                public delegate string GetContent();
                public GetContent Content { get; set; }
                public int Version { get; set; }

                public override void ExecuteResult(ControllerContext context)
                {
                    if (context == null)
                    {
                        throw new ArgumentNullException("context");
                    }
                    if (this.Content != null)
                    {
                        HttpResponseBase response = context.HttpContext.Response;
                        // this is the content type required by Chrome 6 for server-sent events
                        response.ContentType = "text/event-stream";
                        response.BufferOutput = false;
                        // this is important because Chrome fails with a "failed to load resource"
                        // error if the server attempts to put the charset after the content type
                        response.Charset = null;
                        string[] newStrings = context.HttpContext.Request.Headers.GetValues("Last-Event-ID");
                        if (newStrings == null || newStrings[0] != this.Version.ToString())
                        {
                            string value = this.Content();
                            response.Write(string.Format("data:{0}\n\n", value));
                            //response.Write(string.Format("id:{0}\n", this.Version));
                        }
                        else
                        {
                            response.Write("");
                        }
                    }
                }
            }
        }

    The problem is on the server side, as there is still a big gap between the expected result and what's actually going on.

    Expected result: EventSource opens a stream connection to the server; the server keeps it open for a safe time (say, 2 minutes) so that I am protected from thread leaking from dead clients; as new message events are received by the server (and enqueued to a thread-safe collection such as BlockingCollection) they are pushed into the open stream to the client:

        message 1 received at T+0ms, pushed to the client at T+x
        message 2 received at T+200ms, pushed to the client at T+x+200ms

    Actual behaviour: EventSource opens a stream connection to the server; the server keeps it open until a message event arrives (thanks to long polling); once a message is received, MVC pushes the message and closes the connection; EventSource has to reopen the connection, and this happens after a couple of seconds:

        message 1 received at T+0ms, pushed to the client at T+x
        message 2 received at T+200ms, pushed to the client at T+x+3200ms

    This is not OK, as it defeats the purpose of using SSE: the clients start reconnecting as in normal polling, and message delivery gets delayed.

    Now, the question: is there a native way to keep the connection open after sending the first message, and to send further messages on the same connection?
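    For comparison, a minimal hedged sketch (not the original code; the controller and action names are hypothetical) of the "keep it open and flush" idea: several events are streamed over one response by writing and flushing inside a loop in a synchronous action, instead of completing the action after the first message:

        using System;
        using System.Collections.Concurrent;
        using System.Web.Mvc;

        public class StreamController : Controller
        {
            private static readonly BlockingCollection<string> _data = new BlockingCollection<string>();

            public void Events()
            {
                Response.ContentType = "text/event-stream";
                Response.BufferOutput = false;
                Response.Charset = null;

                // Keep the connection open for up to ~2 minutes, guarding against
                // dead clients, and push each message as soon as it is dequeued.
                DateTime deadline = DateTime.UtcNow.AddMinutes(2);
                while (DateTime.UtcNow < deadline && Response.IsClientConnected)
                {
                    string msg;
                    if (_data.TryTake(out msg, TimeSpan.FromSeconds(15)))
                    {
                        Response.Write(string.Format("data: {0}\n\n", msg));
                        Response.Flush(); // this event goes out now; the connection stays open
                    }
                }
            }
        }

    The trade-off of this synchronous sketch is that it ties up an ASP.NET worker thread per connected client for the lifetime of the stream, which is why it is bounded by the two-minute deadline and the IsClientConnected check.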


  • How do I detect a file write error in C?

    - by rich
    I have an embedded environment where a user might insert or remove a USB flash drive. I would like to know if the drive has been removed, or if there is some other problem, when I try to write to the drive. However, Linux just saves the information in its buffers and returns with no error indicated. The computer I'm using comes with a 2.4.26 kernel and libc 2.3.2.

    I'm mounting the drive this way:

        i = mount(MEMORY_DEV_PATH, MEMORY_MNT_PATH, "vfat", MS_SYNCHRONOUS, NULL);

    That works:

        50:/root # mount
        /dev/scsi/host0/bus0/target0/lun0/part1 on /mem type vfat (rw,sync)
        50:/root #

    Later, I try to copy a file to it:

        int ifile, ofile;
        ifile = open("/tmp/tmpmidi.mid", O_RDONLY);
        if (ifile < 0) {
            perror("open in");
            break;
        }
        ofile = open(current_file_name.c_str(), O_WRONLY | O_SYNC);
        if (ofile < 0) {
            perror("open out");
            break;
        }
        #define BUFSZ 256
        char buffer[BUFSZ];
        while (1) {
            i = read(ifile, buffer, BUFSZ);
            if (i < 0) {
                perror("read");
                break;
            }
            j = write(ofile, buffer, i);
            if (j < 0) {
                perror("write");
                break;
            }
            if (i != j) {
                perror("Sizes wrong");
                break;
            }
            if (i < BUFSZ) {
                printf("Copy is finished, I hope\n");
                close(ifile);
                close(ofile);
                break;
            }
        }

    If this snippet of code is executed with a write-protected USB memory, the result is "Copy is finished, I hope" amid a flurry of error messages from the kernel on the console. I believe the same thing would happen if I simply removed the USB drive (without unmounting it).

    I have also fiddled with devfs. I figured out how to get it to automatically mount the drive (with the REGISTER event), but it never seems to trigger UNREGISTER when I pull out the memory.

    How can I determine in my program whether I have successfully created a file?

    Update 4 July: It was a silly oversight of me not to check the result from close(). Unfortunately, the file can be closed without error, so that didn't help. What about fsync()? That sounds like a good idea, but it didn't catch the error either. There might be some interesting information in /sys if I had such a thing; I believe that didn't get added until 2.6.x. The comment(s) about the quality of my flash drive are probably justified; it's one of the earlier ones. In fact, write-protect switches seem to be extremely rare these days.

    I think I have to use the overkill option: create a file, unmount and remount the drive, and check to see if the file is there. If that doesn't solve my problem, then something is really messed up! Note to self: make sure the file you try to create isn't already there!

    By the way, this does happen to be a C++ program; you can tell by the .c_str(), which I had intended to edit out for simplicity.
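    For reference, the error checks discussed above gathered in one place; a sketch only, not a guaranteed fix, since, as noted, an old kernel/VFAT combination may still report success for all of these calls:

        /* Sketch: an error-checked write path; the file name is a placeholder. */
        #include <stdio.h>
        #include <string.h>
        #include <fcntl.h>
        #include <unistd.h>

        int checked_write(const char *path, const char *buf, size_t len)
        {
            int fd = open(path, O_WRONLY | O_CREAT | O_SYNC, 0644);
            if (fd < 0) { perror("open"); return -1; }

            ssize_t n = write(fd, buf, len);
            if (n < 0 || (size_t)n != len) { perror("write"); close(fd); return -1; }

            if (fsync(fd) < 0) { perror("fsync"); close(fd); return -1; } /* force to media */
            if (close(fd) < 0) { perror("close"); return -1; }            /* errors can surface here */
            return 0;
        }

        int main(void)
        {
            const char *msg = "probe\n";
            return checked_write("/mem/probe.txt", msg, strlen(msg)) ? 1 : 0;
        }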


  • C# Object Problem - Can't Solve It

    - by user612041
    I'm getting the error 'Object reference not set to an instance of an object'. I've tried looking at similar problems but genuinely cannot see what the problem is with my program. The line of code that I am having an error with is:

        labelQuestion.Text = table.Rows[0]["Question"].ToString();

    Here is my code in its entirety:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;
        using System.Data.OleDb;
        using System.Data.Sql;
        using System.Data.SqlClient;

        namespace Quiz_Test
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                String chosenAnswer, correctAnswer;
                DataTable table;

                private void Form1_Load(object sender, EventArgs e)
                {
                    // declare connection string using Windows security
                    string cnString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\Users\\Hannah\\Desktop\\QuizQuestions.accdb";
                    // declare connection, command and other related objects
                    OleDbConnection conGet = new OleDbConnection(cnString);
                    OleDbCommand cmdGet = new OleDbCommand();
                    //try
                    //{
                    // open connection
                    conGet.Open();
                    //String correctAnswer;
                    cmdGet.CommandType = CommandType.Text;
                    cmdGet.Connection = conGet;
                    cmdGet.CommandText = "SELECT * FROM QuizQuestions ORDER BY rnd()";
                    OleDbDataReader reader = cmdGet.ExecuteReader();
                    reader.Read();
                    labelQuestion.Text = table.Rows[0]["Question"].ToString();
                    radioButton1.Text = table.Rows[0]["Answer 1"].ToString();
                    radioButton2.Text = table.Rows[0]["Answer 2"].ToString();
                    radioButton3.Text = table.Rows[0]["Answer 3"].ToString();
                    radioButton4.Text = table.Rows[0]["Answer 4"].ToString();
                    correctAnswer = table.Rows[0]["Correct Answer"].ToString();
                    conGet.Close();
                }

                private void btnSelect_Click(object sender, EventArgs e)
                {
                    String cnString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\Users\\Hannah\\Desktop\\QuizQuestions.accdb";
                    // declare connection, command and other related objects
                    OleDbConnection conGet = new OleDbConnection(cnString);
                    OleDbCommand cmdGet = new OleDbCommand();
                    //try
                    {
                        // open connection
                        conGet.Open();
                        cmdGet.CommandType = CommandType.Text;
                        cmdGet.Connection = conGet;
                        cmdGet.CommandText = "SELECT * FROM QuizQuestions ORDER BY rnd()"; // select all columns in all rows
                        OleDbDataReader reader = cmdGet.ExecuteReader();
                        reader.Read();

                        if (radioButton1.Checked)
                        {
                            chosenAnswer = reader["Answer 1"].ToString();
                        }
                        else if (radioButton2.Checked)
                        {
                            chosenAnswer = reader["Answer 2"].ToString();
                        }
                        else if (radioButton3.Checked)
                        {
                            chosenAnswer = reader["Answer 3"].ToString();
                        }
                        else
                        {
                            chosenAnswer = reader["Answer 4"].ToString();
                        }

                        if (chosenAnswer == reader["Correct Answer"].ToString())
                        {
                            //chosenCorrectly++;
                            MessageBox.Show("You have got this answer correct");
                            //label2.Text = "You have got " + chosenCorrectly + " answers correct";
                        }
                        else
                        {
                            MessageBox.Show("That is not the correct answer");
                        }
                    }
                }
            }
        }

    I realise the problem isn't too big, but I can't see how my declaration timings are wrong.
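    As a point of reference: a DataTable only has rows if something fills it. A minimal hedged sketch (not a statement about where your bug is) of populating a DataTable from an open data reader, so that Rows[0] can be read safely:

        // Sketch: load the reader's result set into the table before indexing it.
        using (OleDbDataReader reader = cmdGet.ExecuteReader())
        {
            DataTable table = new DataTable();
            table.Load(reader);           // copies the result set into the table
            if (table.Rows.Count > 0)     // guard against an empty result
            {
                labelQuestion.Text = table.Rows[0]["Question"].ToString();
            }
        }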


  • Serious problem with WCF, GridViews, callbacks and ExecuteReader exceptions

    - by barjed
    Hi, I have this problem that is driving me insane. I have a project to deliver before Thursday. Basically, it's an app consisting of three components that communicate with each other over WCF. I have one console app and one Windows Forms app. The console app is a server that's connected to the database. You can add records to it via the Windows Forms client, which connects to the server through WCF. The code for the client:

        namespace BankAdministratorClient {
            [CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Single, UseSynchronizationContext = false)]
            public partial class Form1 : Form, BankServverReference.BankServerCallback {
                private BankServverReference.BankServerClient server = null;
                private SynchronizationContext interfaceContext = null;

                public Form1() {
                    InitializeComponent();
                    interfaceContext = SynchronizationContext.Current;
                    server = new BankServverReference.BankServerClient(new InstanceContext(this), "TcpBinding");
                    server.Open();
                    server.Subscribe();
                    refreshGridView("");
                }

                public void refreshClients(string s) {
                    SendOrPostCallback callback = delegate(object state) { refreshGridView(s); };
                    interfaceContext.Post(callback, s);
                }

                public void refreshGridView(string s) {
                    try {
                        userGrid.DataSource = server.refreshDatabaseConnection().Tables[0];
                    } catch (Exception ex) {
                        MessageBox.Show(ex.ToString());
                    }
                }

                private void buttonAdd_Click(object sender, EventArgs e) {
                    server.addNewAccount(Int32.Parse(inputPIN.Text), Int32.Parse(inputBalance.Text));
                }

                private void Form1_FormClosing(object sender, FormClosingEventArgs e) {
                    try {
                        server.Unsubscribe();
                        server.Close();
                    } catch {}
                }
            }
        }

    The code for the server:

        namespace SSRfinal_tcp {
            class Program {
                static void Main(string[] args) {
                    Console.WriteLine(MessageHandler.dataStamp("The server is starting up"));
                    using (ServiceHost server = new ServiceHost(typeof(BankServer))) {
                        server.Open();
                        Console.WriteLine(MessageHandler.dataStamp("The server is running"));
                        Console.ReadKey();
                    }
                }
            }

            [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, InstanceContextMode = InstanceContextMode.PerCall, IncludeExceptionDetailInFaults = true)]
            public class BankServer : IBankServerService {
                private static DatabaseLINQConnectionDataContext database = new DatabaseLINQConnectionDataContext();
                private static List<IBankServerServiceCallback> subscribers = new List<IBankServerServiceCallback>();

                public void Subscribe() {
                    try {
                        IBankServerServiceCallback callback = OperationContext.Current.GetCallbackChannel<IBankServerServiceCallback>();
                        if (!subscribers.Contains(callback))
                            subscribers.Add(callback);
                        Console.WriteLine(MessageHandler.dataStamp("A new Bank Administrator has connected"));
                    } catch {
                        Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has failed to connect"));
                    }
                }

                public void Unsubscribe() {
                    try {
                        IBankServerServiceCallback callback = OperationContext.Current.GetCallbackChannel<IBankServerServiceCallback>();
                        if (subscribers.Contains(callback))
                            subscribers.Remove(callback);
                        Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has been signed out from the connection list"));
                    } catch {
                        Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has failed to sign out from the connection list"));
                    }
                }

                public DataSet refreshDatabaseConnection() {
                    var q = from a in database.GetTable<Account>() select a;
                    DataTable dt = q.toTable(rec => new object[] { q });
                    DataSet data = new DataSet();
                    data.Tables.Add(dt);
                    Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has requested a database data listing refresh"));
                    return data;
                }

                public void addNewAccount(int pin, int balance) {
                    Account acc = new Account() { PIN = pin, Balance = balance, IsApproved = false };
                    database.Accounts.InsertOnSubmit(acc);
                    database.SubmitChanges();
                    database.addNewAccount(pin, balance, false);
                    subscribers.ForEach(delegate(IBankServerServiceCallback callback) {
                        callback.refreshClients("New operation is pending approval.");
                    });
                }
            }
        }

    This is really simple and it works for a single window. However, when you open multiple instances of the client window and try to add a new record, the window performing the insert operation crashes with an ExecuteReader error: "requires an open and available Connection. The connection's current state is Connecting." I have no idea what's going on. Please advise.
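
    A likely culprit is the single static DatabaseLINQConnectionDataContext shared by every PerCall service instance: LINQ to SQL's DataContext is not thread-safe, so two clients hitting the service at once can collide on the same underlying connection. Below is a minimal sketch of one common fix, giving each operation its own short-lived context; the type and member names come from the question, the detached-DataTable copy is illustrative. Note also that addNewAccount appears to insert the record twice (once via InsertOnSubmit/SubmitChanges and once via the addNewAccount stored-procedure call), which is worth double-checking.

        // Sketch: each WCF call gets its own DataContext instead of sharing one static instance.
        public DataSet refreshDatabaseConnection() {
            using (var db = new DatabaseLINQConnectionDataContext()) {
                var q = from a in db.Accounts select a;

                // Copy the results into a detached DataTable by hand, so nothing
                // holds a reference to the disposed context afterwards.
                var dt = new DataTable("Accounts");
                dt.Columns.Add("PIN", typeof(int));
                dt.Columns.Add("Balance", typeof(int));
                foreach (var acc in q)
                    dt.Rows.Add(acc.PIN, acc.Balance);

                var data = new DataSet();
                data.Tables.Add(dt);
                return data;
            }
        }

        public void addNewAccount(int pin, int balance) {
            using (var db = new DatabaseLINQConnectionDataContext()) {
                // Insert once, via LINQ to SQL only.
                db.Accounts.InsertOnSubmit(new Account { PIN = pin, Balance = balance, IsApproved = false });
                db.SubmitChanges();
            }
            subscribers.ForEach(cb => cb.refreshClients("New operation is pending approval."));
        }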

    Read the article

  • Clear/Reset Result Table of Search page in OAF

    - by PRajkumar
    A common problem developers face after creating a Search page is how to clear/reset the result table when the search page is opened for the first time, or when the user is redirected back to the same search page from another page (say, a delete page or an update page). Add the following code to your Search page controller where you have constructed your query region:

        import oracle.apps.fnd.framework.webui.beans.layout.OAQueryBean;
        ...
        public void processRequest(OAPageContext pageContext, OAWebBean webBean)
        {
            super.processRequest(pageContext, webBean);
            // Here "QueryRN" is the name of your query region
            OAQueryBean queryBean = (OAQueryBean)webBean.findChildRecursive("QueryRN");
            queryBean.clearSearchPersistenceCache(pageContext);
        }

    Note – After adding this code, there is no need to worry about the state of the Application Module (AM). The code cleans up the result table automatically every time the Search page is opened for the first time and whenever the user is redirected back to it. Still, as a good coding standard, always keep the AM state set to FALSE when redirecting back to the search page.

    Read the article

  • Stupid Geek Tricks: How to Perform Date Calculations in Windows Calculator

    - by Usman
    Would you like to know how many days old you are today? Can you tell what the date will be 78 days from now? How many days are left till Christmas? How many days have passed since your last birthday? All these questions have their answers hidden within Windows! Curious? Keep reading to see how you can answer these questions in an instant using Windows’ built-in utility called ‘Calculator’. No, no. This isn’t a guide to show you how to perform basic calculations on a calculator. This is an application of a unique feature of the Calculator application in Windows, and the feature is called Date Calculation. Most of us don’t really use the Windows Calculator that much, and when we do, it’s only for an instant (to do small calculations). However, it is packed with some really interesting features, so let’s go ahead and see how Date Calculation works. To start, open Calculator by pressing the Win key and typing calcul… (it should have popped up by now; if not, you can type the rest of the ‘…ator’ as well just to be sure). Open it. And by the way, this date calculation function works in both Windows 7 and 8.
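
    For the programmatically inclined, the same arithmetic Calculator's Date Calculation mode performs is a few lines of C#; this is purely illustrative, and the birthday is made up:

        using System;

        class DateCalc
        {
            static void Main()
            {
                var birthday = new DateTime(1990, 5, 14);   // hypothetical birthday
                var christmas = new DateTime(DateTime.Today.Year, 12, 25);

                // Subtracting two DateTimes yields a TimeSpan; .Days gives whole days.
                Console.WriteLine("Days old today: " + (DateTime.Today - birthday).Days);
                Console.WriteLine("78 days from now: " + DateTime.Today.AddDays(78).ToLongDateString());
                Console.WriteLine("Days till Christmas: " + (christmas - DateTime.Today).Days);
            }
        }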

    Read the article

  • SQL SERVER – Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video

    - by pinaldave
    One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most developers have worked with SQL Server and know pretty much every error they face during development. However, they hardly ever install a fresh SQL Server. As installing SQL Server is a rare occasion – unless you are a DBA responsible for such instances – the errors faced during installation are pretty rare as well. I have earlier written an article which describes how to resolve errors related to SQL Server connections. Even though the step-by-step directions are pretty simple, there are many first-time IT professionals who are not able to figure out how to resolve this error. I have quickly built a video which covers most of the solutions related to resolving the connection error. In the Fix SQL Server Connection Error article the following workarounds are described:
    SQL Server Services
    TCP/IP Settings
    Firewall Settings
    Enable Remote Connection
    Browser Services
    Firewall exception of sqlbrowser.exe
    Recreating Alias
    Related Tips in SQL in Sixty Seconds:
    SQL SERVER – FIX : ERROR : (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: )
    SQL SERVER – Could not connect to TCP error code 10061: No connection could be made because the target machine actively refused it
    SQL SERVER – Connecting to Server Using Windows Authentication by SQLCMD
    SQL SERVER – Fix : Error: 15372 Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed
    SQL SERVER – Dedicated Access Control for SQL Server Express Edition – An error occurred while obtaining the dedicated administrator connection (DAC) port.
    SQL SERVER – Fix : Error: 4064 – Cannot open user default database. Login failed. Login failed for user
    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)
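
    If you prefer to test connectivity from code rather than from Management Studio, a tiny probe like the C# sketch below surfaces the same provider errors the video walks through; the connection string is a placeholder you must adapt:

        using System;
        using System.Data.SqlClient;

        class ConnectionProbe
        {
            static void Main()
            {
                // Placeholder connection string: substitute your own server name.
                var connectionString = "Data Source=MYSERVER;Initial Catalog=master;Integrated Security=True;Connect Timeout=5";

                try
                {
                    using (var conn = new SqlConnection(connectionString))
                    {
                        conn.Open();
                        Console.WriteLine("Connected. Server version: " + conn.ServerVersion);
                    }
                }
                catch (SqlException ex)
                {
                    // Named Pipes error 40 / TCP 10061 style failures land here;
                    // ex.Number helps map the failure to the workarounds listed above.
                    Console.WriteLine("Connection failed ({0}): {1}", ex.Number, ex.Message);
                }
            }
        }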

    Read the article

  • How do I run Firefox OS as a standalone application?

    - by JamesTheAwesomeDude
    I got the add-on for the Firefox OS simulator, and it works great! It even keeps functioning after Firefox is closed, so I can save processing power for other things. I'd like to run it as a standalone application, so that I don't even have to open Firefox in the first place. I've gone to the System Monitor, and it says that the process (I guessed which by CPU usage and filename) was started via /home/james/.mozilla/firefox-trunk/vkuuxfit.default/extensions/[email protected]/resources/r2d2b2g/data/linux64/b2g/plugin-container 3386 true tab, so I tried running that in the Terminal (after I'd closed the simulator, of course,) but it gives this: james@james-OptiPlex-GX620:~/.mozilla/firefox-trunk/vkuuxfit.default/extensions/[email protected]/resources/r2d2b2g/data/linux64/b2g$ ./plugin-container 3386 true tab ./plugin-container: error while loading shared libraries: libxpcom.so: cannot open shared object file: No such file or directory james@james-OptiPlex-GX620:~/.mozilla/firefox-trunk/vkuuxfit.default/extensions/[email protected]/resources/r2d2b2g/data/linux64/b2g$ What should I do? Is what I'm attempting even possible? (It should be, since the simulator kept running even after Firefox itself was closed...) NOTE: I've tried chmod u+sx plugin-container, but that didn't help.

    Read the article

  • How about a new platform for your next API… a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx

    Say what? I’m seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially drive A/B or MVT testing). That’s a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sounds like a Content Management System would be a good fit? I’ve been looking at that and it’s a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using a CMS which is very useful, but probably not something you’d build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it’s simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use.

    Enter Umbraco
    Umbraco is an open source .NET CMS that’s been around for a while. It has very good adoption, a lively community and a good release cycle. It’s easy to use, has all the functionality you need for a CMS-driven API, and it’s scalable (although you won’t necessarily put much scale on the CMS layer). In the rest of this post, I’ll build out a simple app config API using Umbraco. We’ll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we’ll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we’ll look at how you could build this into a wider solution. If you want to try this for yourself, it’s ultra easy – there’s an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They’re standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete).

    1. Create Configuration Document Type
    In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We’ll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type. This will base your new type off the Master type, which gives you some existing properties that we’ll use – like the Page Title which will be the resource URL. In the Generic Properties tab for the new Document Type, you set the properties you’ll be able to edit and return for the resource. Here I’ve added a text string where I’ll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It’s likely to change during the life of the product, but not very often, so it’s good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release.

    2. Define the response template
    Now we’ve defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it’s easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here’s a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering:

        @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
        @using Newtonsoft.Json
        @{
            Layout = null;
        }
        @if (UmbracoContext.HttpContext.Request.Headers["accept"] != null
             && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json")
        {
            Response.ContentType = "application/json";
            @Html.Raw(JsonConvert.SerializeObject(new
            {
                cacheLifespan = CurrentPage.cacheLifespan,
                bannerImageUrl = CurrentPage.bannerImage,
                nextReleaseDate = CurrentPage.nextReleaseDate
            }))
        }
        else
        {
            <h1>App configuration</h1>
            <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p>
            <p>Banner Image: </p>
            <img src="@CurrentPage.bannerImage">
            <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p>
        }

    That’s a rough-and-ready example of what you can do. You could make it completely generic and just render all the document’s properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship.

    3. Create the content
    With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type. Here I’ve set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it’ll be available to access.

    4. Access the resource
    Now if you open that URL in the browser, you’ll see the HTML version rendered – complete with the image and formatted date. Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get the JSON rendering instead. That’s the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort.

    5. The wider landscape
    If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn’t need it to. When you have additional requirements, like logging API access requests (but doing it out-of-band so clients aren’t impacted), you can put a very thin API layer on top of Umbraco, and cache the CMS responses in your API layer. Here the API does a passthrough to the CMS, so the CMS still controls the content, but the API caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support 1000s of requests per second, you’re scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This also allows an approach to logging where the API asynchronously publishes a message to a queue (Redis in this case), which can be picked up later and persisted by a different process. Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I’ll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources. Is it maintainable? We have three moving parts, which are all managed resources in Azure – an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API, which is very simple. Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day’s effort. Worth considering if you’re publishing long-lived resources through your API.
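
    To make the passthrough idea concrete, here is a minimal C# sketch of that thin API layer. The CMS address, route, cache key and one-minute lifetime are all assumptions, and the out-of-band logging is omitted:

        using System;
        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Runtime.Caching;
        using System.Text;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class ConfigController : ApiController
        {
            private static readonly HttpClient Cms = new HttpClient();

            public async Task<IHttpActionResult> Get(string app)
            {
                var cacheKey = "config:" + app;
                var json = MemoryCache.Default.Get(cacheKey) as string;

                if (json == null)
                {
                    // Hypothetical internal CMS address; Umbraco renders the JSON via the Razor template.
                    var request = new HttpRequestMessage(HttpMethod.Get, "http://umbraco.internal/config/" + app);
                    request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

                    var response = await Cms.SendAsync(request);
                    response.EnsureSuccessStatusCode();
                    json = await response.Content.ReadAsStringAsync();

                    // Cache for one minute, so the CMS sees roughly 1 request/minute per API instance.
                    MemoryCache.Default.Set(cacheKey, json, DateTimeOffset.UtcNow.AddMinutes(1));
                }

                return ResponseMessage(new HttpResponseMessage(HttpStatusCode.OK)
                {
                    Content = new StringContent(json, Encoding.UTF8, "application/json")
                });
            }
        }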

    Read the article

  • CodePlex Daily Summary for Tuesday, March 02, 2010

    New Projects
    Acceptance Test Excel Addin: Acceptance Test Excel Addin is a tool to author, execute and analyze acceptance tests in Excel. The tester writes tests using Given-When-Then (Gherk...
    Adrastus: Related to manage Issue/Item of any type. To be defined later TBD
    An opportunity of upgrading Linux operating system: Want to add a new software in Linux operating system? Here is an opportunity. UFS (User Friendly Scheduler) has been designed to make the operating...
    AspNetPager: AspNetPager is a free custom paging control for ASP.NET web form applications. It's one of the most popular ASP.NET third party controls used by chi...
    AzureRunMe: Run Java, Ruby, Python, [insert language of your choice] applications in Windows Azure. You provide a self contained ZIP file with a runme.bat f...
    DiffPlex - a .NET Diff Generator: DiffPlex is a combination of a .NET Diffing Library with both a Silverlight and HTML diff viewer.
    GPA WebClient: GPA project web client.
    Maintenance Service: A lot of projects need to have a windows service that executes different tasks. I am tired of creating the same service for all these projects, so He...
    Marager Component Framework: Marager Component Framework (mcf)
    Pod Thrower: An application that gives a simple way to create podcast RSS feeds from local computer folders.
    Rapidshare Episode Downloader: Rapidshare Episode Downloader is a software that enables you (yes, you!) to organize your many episodes of different TV shows in a nice list, prese...
    Resxus - Total .net string resource management tool: RESXUS is a resource file management tool which is being created under my job-experience of managing multilingual resx files. The first goal of th...
    SLFX: SLFX
    TheWhiteAmbit: Hybrid Scanline-Raytracing Engine. VisitorPattern based Scenegraph written in C++. DirectX9 or DirectX10 rendering is used for PrimaryRays and CUDA...
    TSqlMigrations: Yet another migrations platform, right? This is purely sql based (tsql, as it is only for Sql Server at this time). This tool is meant to help mana...
    WAFFLE: Windows Authentication Functional Framework (LE): WAFFLE - Windows Authentication Functional Framework (Light Edition) is a .NET library with a COM interface and a Java bridge that provides a worki...
    web lib api: This project aims to pull together the major web api/webservices for each relevant category.
    WSDLGenerator: A tool to generate a WSDL file from a C# dll which contains one or more Microsoft WebServices. The project is built using VS2010RC and uses .net Fram...
    XNA Re-usable UI Components: The aim of this project is to create a re-usable set of UI game components helping reduce production time for your game. More information can be...
    YUI Compressor Custom Tool for Visual Studio: This EXTREMELY simple custom tool is used to automatically generate a *.min.css file from your existing code on save. It is merely a packaged versi...

    New Releases
    Acceptance Test Excel Addin: 1.0.0.0: How to use: extract AcceptanceTestExcelAddIn-1.0.0.0.zip, run setup.exe, extract PasswordSample.zip, open Excel, and you will see a new tab, "QA To...
    An opportunity of upgrading Linux operating system: UFS: The software provided by me is just a basic one that will run in the terminal through the gcc compiler. The developers are therefore requested to make ...
    AspNetPager: Demo project: AspNetPager version 7.3.2 demo web site project
    Business Framework: Formula Samples: A sample demonstrating textual language - Formula. It can be used in many business scenarios allowing the end-user to configure or interact with the sy...
    Deblector: Deblector 1.1: This build fixes compatibility with .NET Reflector 6.
    Desktop Dimmer: March 2010: First release, March 2010
    EasyDump: EasyDump 1.0.1: Easy Dump 1.0.1 fixes duplicate output when executed twice
    Extensia: Extensia: Extensia is a very large list of extension methods and a few helper types. Many methods have practical utility (e.g. console parsing) whilst some ...
    Fluent Assertions: Fluent Assertions release 1.0: The first release of the Fluent Assertions. It contains assertions for the most common types and has several extension points.
    FolderSize: FolderSize.Win32.1.0.6.0: A simple utility intended to be used to scan hard drives for the folders that take the most space and display this to the user...
    GamerShots.com Screenshot Capture: GamerShots.com Screenshot Capture: Windows Forms application written in C# ASP.Net. Allows the user to capture a screen by pressing the "Print Scrn" key or by user input. Then uses ...
    Jet Login Tool (JetLoginTool): Stopped - 1.5.3713.17328: Fixed: Engine will now actually stop when the "Stop" button is hit
    Jolt Environment - RuneScape Emulator: Jolt Environment 1.0.6000 GOLD: Features since 1.0.3200: - Account Creation via client - Character Saving/Loading (via MySQL serialization) - Ground Objects - Ground Items (with m...
    jQuery Library for SharePoint Web Services: SPServices 0.5.2: NOTE: While I work on new releases, I post alpha versions. Usually the alpha versions are here to address a particular need. I DO NOT recommend us...
    LINQ to XSD: 1.0.0: The LINQ to XSD technology provides .NET developers with support for typed XML programming. LINQ to XSD contributes to the LINQ project (.NET Langu...
    Maintenance Service: Alpha Release: Alpha Release of the Software
    MDownloader: MDownloader-0.15.5.56206: Fixed many gathered bugs;
    MDownloader: MDownloader-0.15.6.56217: Fixed retrieving hotfile data using registered accounts.
    MRDS Services for HiTechnic: HiTechnic Controllers: The HiTechnic Controllers package contains services for MRDS that work with the LEGO NXT and TETRIX Servo and Motor Controllers. Initial ReleaseCo...
    Open NFe: DANFE 1.9.2: Corrections to the DANFE that will be included in version 1.9.2
    PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT-2.4: While testing a standard software parts kit library for strict portability, the following error condition occurred when using the Windows version ...
    Rapidshare Episode Downloader: RED 0.8: This is an almost fully working version of the software. What DOESN'T work: showing the list of rapidshare search results (but the query is being made...
    Reusable Library: V1.0.4: A collection of reusable abstractions for enterprise application developers.
    SharePoint LogViewer: SharePointLogViewer 1.5.1: The following bugs are fixed: bookmarks deleted on refresh/reload; bookmarks not properly navigated on filtered list; disabled toolbar buttons did n...
    SharePoint Taxonomy Extensions: SharePoint Taxonomy Extensions 1.1-1: - Some bugfixes - Possibility to switch between alphanumeric and manual sorting
    SilverSynth - Digital Audio Synthesis for Silverlight: SilverSynth 1.1: SilverSynth 1.1 is a zip file of the source code and includes an updated version of the demo application including presets.
    SQL Server Reporting Services MSBuild Tasks: Release 1.1.14669: New features: new tasks added for Integrated and Native mode: DeleteReportUser Task, ReportUserExists Task
    Tellago DevLabs: BizTalk Data Services v0.2: This release is the first version of the BizTalk Data Services API, a RESTful API for BizTalk Server based on the Open Data (OData) Protocol. The ...
    VCC: Latest build, v2.1.30301.0: Automatic drop of latest build
    WAFFLE: Windows Authentication Functional Framework (LE): 1.2: Build 1.2.4217.0, initial open-source release. - Account lookup locally and in Active Directory. - Enumerating Active Directory domains. - Returns...
    Watermarker: 0.87: 01.03.2010: • FIXED: some stability fixes • ADDED: ability to choose any number of pictures and a folder to save them to after the operation completes
    WSDLGenerator: WSDLGenerator 0.0.0.1: Initial version
    XNA Re-usable UI Components: Re-usable Game Components V1.0: First public release of the source code
    YUI Compressor Custom Tool for Visual Studio: YUI Compressor Custom Tool with Installer v0.1a: Initial release with alpha installer - documentation to follow.

    Most Popular Projects
    MetaSharp
    Rawr
    WBFS Manager
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    Windows Presentation Foundation (WPF)
    Microsoft SQL Server Community & Samples
    ASP.NET
    Image Resizer Powertoy Clone for Windows

    Most Active Projects
    Rawr
    BlogEngine.NET
    MapWindow GIS
    patterns & practices – Enterprise Library
    jQuery Library for SharePoint Web Services
    SharpMap - Geospatial Application Framework for the CLR
    Rapid Entity Framework (ORM). CTP 2
    DiffPlex - a .NET Diff Generator
    MDT Web FrontEnd
    WPF Dialogs

    Read the article

  • Microsoft’s Contribution to jQuery – Client Templating

    - by joelvarty
    I am interested to see the community’s response to Microsoft’s contributions to jQuery. I have been using jTemplates on and off in my apps for a while, but I will certainly check out the new templating plugins put forth by MS and explained here by Scott Guthrie. It may be that some are against the very idea of a company like Microsoft being involved with jQuery, and Scott explains the process with the following: “jQuery has a fantastic developer community, and a very open way to propose suggestions and make contributions. Microsoft is following the same process to contribute to jQuery as any other member of the community.” I think we can take this in one of two ways: (1) it’s great that Microsoft sees itself as part of a greater community that it can support, or (2) it’s the first step in Microsoft’s attempt to usurp the community and have greater control over the web, its standards, and its developer community. Personally, I believe Microsoft sees the world (and the web) differently from how they did back when IE had more than 80% of the browser market. Now, in order to keep its development products relevant, they are pushing ASP.NET (as they have been for a few years) towards a more open strategy that’s more “web-like”, in my opinion. These contributions to jQuery are a good thing, I think. Now, let’s go try out these new plug-ins and see if they stack up… more later - joel

    Read the article

  • Windows Phone 7 ActiveSync error 86000C09 (My First Post!)

    - by Chris Heacock
    Hello fellow geeks! I'm kicking off this new blog with an issue that was a real nuisance, but was relatively easy to fix. During a recent Exchange 2003 to 2010 migration, one of the users was getting an error on his Windows Phone 7 device. The error code that popped up on the phone on every sync attempt was 86000C09. We tested the following: a different user on the same device: WORKED; the problem user on a different device: FAILED. That seemed to point (conclusively) at the user's account as the crux of the issue. This error can come up if a user has too many devices syncing, but he had no other phones. We verified that using the following command: Get-ActiveSyncDeviceStatistics -Identity USERID. Turns out, it was the old familiar inheritable-permissions issue in Active Directory. :-/ This user was not an admin, nor had he ever been one. HOWEVER, his account was cloned from an ex-admin user, so the unchecked box stayed unchecked. We checked the box and voila, data started flowing to his device(s). Here's a refresher on enabling inheritable permissions: open ADUC and enable Advanced Features, then open Properties and go to the Security tab for the user in question. Click on Advanced, and verify that "Include inheritable permissions from this object's parent" is *checked*. You will notice that for certain users, this box keeps getting unchecked. This is normal behavior due to the inbuilt security of Active Directory. People in the following groups will have this flag altered by AD: Account Operators, Administrators, Backup Operators, Domain Admins, Domain Controllers, Enterprise Admins, Print Operators, Read-Only Domain Controllers, Replicator, Schema Admins, Server Operators. Once the box is checked, permissions will flow and the user will be set correctly. Even if the box is later unchecked again, the user will function normally, as they now have the proper permissions configured. You need to perform this same exercise when enabling users for Lync, but that's another blog. :-)   -Chris
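
    If you have to fix this for more than a handful of users, the same checkbox can be flipped from code. A hedged sketch using System.DirectoryServices follows; the LDAP path is a placeholder, and you need a reference to the System.DirectoryServices assembly:

        using System;
        using System.DirectoryServices;

        class FixInheritance
        {
            static void Main()
            {
                // Placeholder path: point this at the affected user object.
                using (var user = new DirectoryEntry("LDAP://CN=Problem User,OU=Staff,DC=contoso,DC=com"))
                {
                    var security = user.ObjectSecurity;

                    // AreAccessRulesProtected == true means the inheritance box is unchecked.
                    if (security.AreAccessRulesProtected)
                    {
                        // false = stop blocking inherited ACEs; true = keep existing explicit ACEs.
                        security.SetAccessRuleProtection(false, true);
                        user.CommitChanges();
                        Console.WriteLine("Inheritable permissions re-enabled.");
                    }
                    else
                    {
                        Console.WriteLine("Inheritance was already enabled.");
                    }
                }
                // Note: for accounts in the protected groups listed above, AD will
                // periodically clear the flag again, as described in the post.
            }
        }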

    Read the article

  • Game Development World Championship 2013 for all game developers

    - by Hanhviope
    Interested in games and programming? Want to be visible in the global game industry? Did you miss the Viope Game Programming Contest 2012? Want to win a trip to Finland, visit a top game studio, and earn other attractive rewards? This is your CHANCE! Viope Solutions proudly announces the Game Development World Championship 2013, a sequel to the successful Viope Game Programming Contest 2012. WHAT? The contest is organized by Viope Solutions. Students and freelancers are invited to compete in different categories. Participants can compete with a computer/console game or a mobile phone game. The competition involves partners and judges from Rovio, Microsoft, Unity, ArtiGames, Housemarque, RedLynx, Remedy, GrandCru, GameReactor and IGDA. WHO? The contest is open to everyone around the world. WHERE? The submission of your game will be done via the Viope World e-learning platform. WHEN? The contest is open from 8th October 2013 till 26th January 2014. HOW? Individuals and teams of up to 4 members can register through our website. For more information, please visit www.viope.com/contest. WE CHALLENGE YOU TO CREATE THE BEST GAMES EVER! Share this with all your friends who would be interested in this contest!

    Read the article

  • How to Reuse Your Old Wi-Fi Router as a Network Switch

    - by Jason Fitzpatrick
    Just because your old Wi-Fi router has been replaced by a newer model doesn’t mean it needs to gather dust in the closet. Read on as we show you how to take an old and underpowered Wi-Fi router and turn it into a respectable network switch (saving yourself $20 in the process). Image by mmgallan.

    Why Do I Want To Do This? Wi-Fi technology has changed significantly in the last ten years but Ethernet-based networking has changed very little. As such, a Wi-Fi router with 2006-era guts is lagging significantly behind current Wi-Fi router technology, but the Ethernet networking component of the device is just as useful as ever; aside from potentially being only 100Mbps capable instead of 1000Mbps (which for 99% of home applications is irrelevant), Ethernet is Ethernet. What does this matter to you, the consumer? It means that even though your old router doesn’t hack it for your Wi-Fi needs any longer, the device is still a perfectly serviceable (and high quality) network switch. When do you need a network switch? Any time you want to share an Ethernet cable among multiple devices, you need a switch. For example, let’s say you have a single Ethernet wall jack behind your entertainment center. Unfortunately you have four devices that you want to link to your local network via hardline, including your smart HDTV, DVR, Xbox, and a little Raspberry Pi running XBMC. Instead of spending $20-30 to purchase a brand new switch of comparable build quality to your old Wi-Fi router, it makes financial sense (and is environmentally friendly) to invest five minutes of your time tweaking the settings on the old router to turn it from a Wi-Fi access point and routing tool into a network switch – perfect for dropping behind your entertainment center so that your DVR, Xbox, and media center computer can all share an Ethernet connection.

    What Do I Need? For this tutorial you’ll need a few things, all of which you likely have readily on hand or are free for download. To follow the basic portion of the tutorial, you’ll need the following:
        1 Wi-Fi router with Ethernet ports
        1 Computer with Ethernet jack
        1 Ethernet cable
    For the advanced tutorial you’ll need all of those things, plus:
        1 copy of DD-WRT firmware for your Wi-Fi router
    We’re conducting the experiment with a Linksys WRT54GL Wi-Fi router. The WRT54 series is one of the best selling Wi-Fi router series of all time and there’s a good chance a significant number of readers have one (or more) of them stuffed in an office closet. Even if you don’t have one of the WRT54 series routers, however, the principles we’re outlining here apply to all Wi-Fi routers; as long as your router administration panel allows the necessary changes, you can follow right along with us. A quick note on the difference between the basic and advanced versions of this tutorial before we proceed. Your typical Wi-Fi router has 5 Ethernet ports on the back: 1 labeled “Internet”, “WAN”, or a variation thereof, intended to be connected to your DSL/cable modem, and 4 labeled 1-4, intended to connect Ethernet devices like computers, printers, and game consoles directly to the Wi-Fi router. When you convert a Wi-Fi router to a switch, in most situations you’ll lose two ports, as the “Internet” port cannot be used as a normal switch port and one of the switch ports becomes the input port for the Ethernet cable linking the switch to the main network. This means, referencing the diagram above, you’d lose the WAN port and LAN port 1, but retain LAN ports 2, 3, and 4 for use. If you only need the switch for 2-3 devices this may be satisfactory. However, for those of you who would prefer a more traditional switch setup where there is a dedicated WAN port and the rest of the ports are accessible, you’ll need to flash a third-party router firmware like the powerful DD-WRT onto your device. Doing so opens up the router to a greater degree of modification and allows you to assign the previously reserved WAN port to the switch, thus opening up LAN ports 1-4. Even if you don’t intend to use that extra port, DD-WRT offers you so many more options that it’s worth the extra few steps.

    Preparing Your Router for Life as a Switch
    Before we jump right in to shutting down the Wi-Fi functionality and repurposing your device as a network switch, there are a few important prep steps to attend to. First, you want to reset the router (if you just flashed a new firmware to your router, skip this step). Follow the reset procedures for your particular router or go with what is known as the “Peacock Method”, wherein you hold down the reset button for thirty seconds, unplug the router and wait (while still holding the reset button) for thirty seconds, and then plug it in while, again, continuing to hold down the reset button. Over the life of a router there are a variety of changes made, big and small, so it’s best to wipe them all back to the factory default before repurposing the router as a switch. Second, after resetting, we need to change the IP address of the device on the local network to an address which does not directly conflict with the new router. The typical default IP address for a home router is 192.168.1.1; if you ever need to get back into the administration panel of the router-turned-switch to check on things or make changes, it will be a real hassle if the IP address of the device conflicts with the new home router. The simplest way to deal with this is to assign an address close to the actual router address but outside the range of addresses that your router will assign via the DHCP client; a good pick then is 192.168.1.2. Once the router is reset (or re-flashed) and has been assigned a new IP address, it’s time to configure it as a switch.

    Basic Router to Switch Configuration
    If you don’t want to (or need to) flash new firmware onto your device to open up that extra port, this is the section of the tutorial for you: we’ll cover how to take a stock router, our previously mentioned WRT54 series Linksys, and convert it to a switch. Hook the Wi-Fi router up to the network via one of the LAN ports (consider the WAN port as good as dead from this point forward; unless you start using the router in its traditional function again or later flash a more advanced firmware to the device, the port is officially retired at this point). Open the administration control panel via a web browser on a connected computer. Before we get started, two things: first, anything we don’t explicitly instruct you to change should be left in the default factory-reset setting as you find it, and two, change the settings in the order we list them, as some settings can’t be changed after certain features are disabled. To start, let’s navigate to Setup -> Basic Setup. Here you need to change the following things:
        Local IP Address: [different than the primary router, e.g. 192.168.1.2]
        Subnet Mask: [same as the primary router, e.g. 255.255.255.0]
        DHCP Server: Disable
    Save with the “Save Settings” button and then navigate to Setup -> Advanced Routing:
        Operating Mode: Router
    This particular setting is very counterintuitive. The “Operating Mode” toggle tells the device whether or not it should enable the Network Address Translation (NAT) feature. Because we’re turning a smart piece of networking hardware into a relatively dumb one, we don’t need this feature, so we switch from Gateway mode (NAT on) to Router mode (NAT off). Our next stop is Wireless -> Basic Wireless Settings:
        Wireless SSID Broadcast: Disable
        Wireless Network Mode: Disabled
    After disabling the wireless we’re going to, again, do something counterintuitive. Navigate to Wireless -> Wireless Security and set the following parameters:
        Security Mode: WPA2 Personal
        WPA Algorithms: TKIP+AES
        WPA Shared Key: [select some random string of letters, numbers, and symbols like JF#d$di!Hdgio890]
    Now you may be asking yourself, why on Earth are we setting a rather secure Wi-Fi configuration on a Wi-Fi router we’re not going to use as a Wi-Fi node? On the off chance that something strange happens (say, a power outage after which your router-turned-switch cycles on and off a bunch of times and the Wi-Fi functionality is activated), we don’t want to be running the Wi-Fi node wide open and granting unfettered access to your network. While the chances of this are next-to-nonexistent, it takes only a few seconds to apply the security measure so there’s little reason not to. Save your changes and navigate to Security -> Firewall:
        Uncheck everything but Filter Multicast
        Firewall Protect: Disable
    At this point you can save your changes again, review the changes you’ve made to ensure they all stuck, and then deploy your “new” switch wherever it is needed.

    Advanced Router to Switch Configuration
    For the advanced configuration, you’ll need a copy of DD-WRT installed on your router. Although doing so is an extra few steps, it gives you a lot more control over the process and liberates an extra port on the device. Hook the Wi-Fi router up to the network via one of the LAN ports (later you can switch the cable to the WAN port). Open the administration control panel via a web browser on the connected computer. Navigate to the Setup -> Basic Setup tab to get started. In the Basic Setup tab, ensure the following settings are adjusted. The setting changes are not optional and are required to turn the Wi-Fi router into a switch.
        WAN Connection Type: Disabled
        Local IP Address: [different than the primary router, e.g. 192.168.1.2]
        Subnet Mask: [same as the primary router, e.g. 255.255.255.0]
        DHCP Server: Disable
    In addition to disabling the DHCP server, also uncheck all the DNSMasq boxes at the bottom of the DHCP sub-menu. If you want to activate the extra port (and why wouldn’t you), in the WAN port section:
        Assign WAN Port to Switch [X]
    At this point the router has become a switch and you have access to the WAN port, so the LAN ports are all free. Since we’re already in the control panel, however, we might as well flip a few optional toggles that further lock down the switch and prevent something odd from happening. The optional settings are arranged by the menu you find them in. Remember to save your settings with the save button before moving on to a new tab. While still in the Setup -> Basic Setup menu, change the following:
        Gateway/Local DNS: [IP address of primary router, e.g. 192.168.1.1]
        NTP Client: Disable
    The next step is to turn off the radio completely (which not only kills the Wi-Fi but actually powers the physical radio chip off). Navigate to Wireless -> Advanced Settings -> Radio Time Restrictions:
        Radio Scheduling: Enable
        Select “Always Off”
    There’s no need to create a potential security problem by leaving the Wi-Fi radio on; the above toggle turns it completely off. Under Services -> Services:
        DNSMasq: Disable
        ttraff Daemon: Disable
    Under the Security -> Firewall tab, uncheck every box except “Filter Multicast”, as seen in the screenshot above, and then disable SPI Firewall. Once you’re done here, save and move on to the Administration tab. Under Administration -> Management:
        Info Site Password Protection: Enable
        Info Site MAC Masking: Disable
        CRON: Disable
        802.1x: Disable
        Routing: Disable
    After this final round of tweaks, save and then apply your settings. Your router has now been, strategically, dumbed down enough to plod along as a very dependable little switch. Time to stuff it behind your desk or entertainment center and streamline your cabling.

    Read the article

  • Don’t be a dinosaur. Use Calendar Tree!

    - by jamiet
    If one spends long enough in my company one will likely eventually have to listen to me bark on about subscribable calendars. I was banging on about them way back in 2009, I’ve cajoled SQLBits into providing one, provided one myself for the World Cup, and opined that they could be transformative for the delivery of BI. I believe subscribable calendars can change the world but have never been good at elucidating why I thought so, for that reason I always direct people to read a blog by Scott Adams (yes, the guy who draws Dilbert) entitled Calendar as Filter. In that blog post Scott writes: I think the family calendar is the organizing principle into which all external information should flow. I want the kids' school schedules for sports and plays and even lunch choices to automatically flow into the home calendar. Everything you do has a time dimension. If you are looking for a new home, the open houses are on certain dates, and certain houses that fit your needs are open at certain times. If you are shopping for some particular good, you often need to know the store hours. Your calendar needs to know your shopping list and preferences so it can suggest good times to do certain things I think the biggest software revolution of the future is that the calendar will be the organizing filter for most of the information flowing into your life. You think you are bombarded with too much information every day, but in reality it is just the timing of the information that is wrong. Once the calendar becomes the organizing paradigm and filter, it won't seem as if there is so much. I wholly agree and hence was delighted to discover (via the Hanselminutes podcast) that Scott has a startup called CalendarTree.com whose raison d’etre is to solve this very problem. What better way to describe a Scott Adams startup than with a Scott Adams comic: I implore you to check out Calendar Tree and make the world a tiny bit better by using it to share any information that has a time dimension to it. Don’t be a dinosaur, use Calendar tree! @Jamiet
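
    If you want to experiment with the underlying mechanism yourself, a subscribable calendar is just an iCalendar (.ics) feed served over HTTP. Here is a hedged C# sketch that emits a minimal single-event feed; the event, dates, and domain are invented, and a real feed should use CRLF line endings per the iCalendar spec:

        using System;
        using System.Text;

        class IcsFeed
        {
            // Builds a minimal iCalendar document with one event. Serve the result
            // with content type text/calendar and a calendar app can subscribe to the URL.
            static string BuildFeed(string summary, DateTime startUtc, DateTime endUtc)
            {
                var sb = new StringBuilder();
                sb.AppendLine("BEGIN:VCALENDAR");
                sb.AppendLine("VERSION:2.0");
                sb.AppendLine("PRODID:-//Example//Subscribable Calendar//EN");
                sb.AppendLine("BEGIN:VEVENT");
                sb.AppendLine("UID:" + Guid.NewGuid() + "@example.com");
                sb.AppendLine("DTSTAMP:" + DateTime.UtcNow.ToString("yyyyMMdd'T'HHmmss'Z'"));
                sb.AppendLine("DTSTART:" + startUtc.ToString("yyyyMMdd'T'HHmmss'Z'"));
                sb.AppendLine("DTEND:" + endUtc.ToString("yyyyMMdd'T'HHmmss'Z'"));
                sb.AppendLine("SUMMARY:" + summary);
                sb.AppendLine("END:VEVENT");
                sb.AppendLine("END:VCALENDAR");
                return sb.ToString();
            }

            static void Main()
            {
                // Invented example event: a school play next month.
                Console.WriteLine(BuildFeed("School play",
                    new DateTime(2014, 6, 1, 18, 0, 0, DateTimeKind.Utc),
                    new DateTime(2014, 6, 1, 20, 0, 0, DateTimeKind.Utc)));
            }
        }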

    Read the article
