Search Results

Search found 7985 results on 320 pages for 'multi byte'.


  • preg_match_all and newlines inside quotes

    - by David
    Another noob regex problem/question. I'm probably doing something silly, so I thought I'd exploit the general ingenuity of the SO regulars ;) I'm trying to match newlines, but only if they occur within either double quotes or single quotes. I also want to catch strings that are between quotes but contain no newlines. Below is what I've got, with its output, and under that the output I would like to get. Any help would be greatly appreciated! :) I use Regex Coach to help me create my patterns, being a novice and all. According to RC, the pattern I supply does match all occurrences within the data, but in my PHP it skips over the multi-line part. I have already tried the 'm' pattern modifier, to no avail. Contents of $CompressedData: <?php $Var = "test"; $Var2 = "test2"; $Var3 = "blah blah blah blah blah blah blah blah blah"; $Var4 = "hello"; ?> Pattern / Code: preg_match_all('!(\'|")(\b.*\b\n*)*(\'|")!', $CompressedData, $Matches); Current print_r output of $Matches: Array ( [0] => Array ( [0] => "test" [1] => "test2" [2] => "hello" ) ... } DESIRED print_r output of $Matches: Array ( [0] => Array ( [0] => "test" [1] => "test2" [2] => "blah blah blah blah blah blah blah blah blah" [3] => "hello" ) ... }

  • Compiling and linking libcurl to create a standalone DLL

    - by Haraldo
    Hi, I've managed to compile a DLL with the necessary linked libraries (*.lib) and with CURL_STATICLIB set in the preprocessor section, among other settings. I'm using the "libcurl-7.19.3-win32-ssl-msvc.zip" package and compiling with VS 2008 Express. This is the first version I've managed to get compiled properly with no link issues etc. The problem I have now is that my DLL needs libcurl.dll to function, and that is not OK: my DLL needs to be independent. I have no idea how to achieve this, and it has taken all day just to get what I've got compiled. I've got the runtime library set to Multi-threaded DLL (Debug/Release respectively) under C/C++ - Code Generation. I have a number of preprocessor definitions set, CURL_STATICLIB being one of them. Configuration Type is set to Dynamic Library, Use of MFC is set to Use MFC in a static library, and Additional Library Directories is set to the lib folders (Debug/Release respectively). I've noticed there is a curllib_static.lib file which I've tried instead of curllib.lib as an additional dependency, but it only compiles with the latter. This is driving me nuts! So I guess I need some guidance on how to make my DLL completely static so it doesn't have any dependencies. I notice my DLL is currently dependent on: CURLLIB.DLL and MSVCR90D.DLL. As I'm pretty new to C++ it could be a setting I'm missing in VS 2008, but I'm not sure. One person said I should be using a static library with *.a files (libcurl.a) etc., but when I do this I get link errors which I haven't been able to resolve. Any guidance here would be much appreciated.

  • Json HTTP Module stream issue

    - by Justin
    Hey, I have an HTTP Module that I use to clean up the JSON returned by my web service (see http://www.codeproject.com/KB/webservices/ASPNET_JSONP.aspx?msg=3400287#xx3400287xx for an example of this.) Basically it relates to calling cross-domain JSON web services from javascript. There is this JsonHttpModule which uses a JsonResponseFilter Stream class to write out the JSON and the overloaded Write method is supposed to wrap the name of the callback function around the JSON, otherwise the JSON errors out as needing a label. However, if the JSON is really long, the Write method in the Stream class is called multiple times, causing the callback function to incorrectly get inserted midway through the JSON. Is there a way in the Stream class to wrap the callback function around the stream at the end or to specify that it write all of the JSON in 1 Write method instead of in chunks?? Here's where it calls the JsonResponseFilter in the JsonHttpModule: public void OnReleaseRequestState(object sender, EventArgs e) { HttpApplication app = (HttpApplication)sender; if (!_Apply(app.Context.Request)) return; // apply response filter to conform to JSONP app.Context.Response.Filter = new JsonResponseFilter(app.Context.Response.Filter, app.Context); } Here's the Write method in the JsonResponseFilter Stream class that gets called multiple times: public override void Write(byte[] buffer, int offset, int count) { var b1 = Encoding.UTF8.GetBytes(_context.Request.Params["callback"] + "("); _responseStream.Write(b1, 0, b1.Length); _responseStream.Write(buffer, offset, count); var b2 = Encoding.UTF8.GetBytes(");"); _responseStream.Write(b2, 0, b2.Length); } Thanks for any help! Justin
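
    One way around Write being called in chunks — offered here as a rough sketch of the buffering idea rather than the article's actual fix — is to have the filter accumulate everything it receives and emit the callback wrapper exactly once when the response is closed. The member names mirror the snippet above; the buffering logic itself is an assumption.

```csharp
using System;
using System.IO;
using System.Text;
using System.Web;

// Sketch: buffer all chunks, wrap once on Close(). Only the members relevant
// to the buffering idea are fleshed out; the rest is minimal Stream plumbing.
public class JsonResponseFilter : Stream
{
    private readonly Stream _responseStream;
    private readonly HttpContext _context;
    private readonly MemoryStream _buffer = new MemoryStream();

    public JsonResponseFilter(Stream responseStream, HttpContext context)
    {
        _responseStream = responseStream;
        _context = context;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Do not touch the real response yet; just accumulate the JSON.
        _buffer.Write(buffer, offset, count);
    }

    public override void Close()
    {
        // All chunks have arrived, so wrap the whole payload exactly once.
        byte[] prefix = Encoding.UTF8.GetBytes(_context.Request.Params["callback"] + "(");
        byte[] suffix = Encoding.UTF8.GetBytes(");");

        _responseStream.Write(prefix, 0, prefix.Length);
        _buffer.WriteTo(_responseStream);
        _responseStream.Write(suffix, 0, suffix.Length);
        _responseStream.Close();
        base.Close();
    }

    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { return _buffer.Length; } }
    public override long Position { get; set; }
    public override void Flush() { }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}
```

    An alternative with less memory pressure is to write the prefix on the first call to Write and the suffix in Flush or Close, but buffering the whole payload is the simplest variant to reason about.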

  • MySql scoping problem with correlated subqueries

    - by Rolf
    Hi, I have this MySQL query, and it works: SELECT nom ,prenom ,(SELECT GROUP_CONCAT(category_en) FROM (SELECT DISTINCT category_en FROM categories c WHERE id IN (SELECT DISTINCT category_id FROM m3allems_to_categories m2c WHERE m3allem_id = 37) ) cS ) categories ,(SELECT GROUP_CONCAT(area_en) FROM (SELECT DISTINCT area_en FROM areas c WHERE id IN (SELECT DISTINCT area_id FROM m3allems_to_areas m2a WHERE m3allem_id = 37) ) aSq ) areas FROM m3allems m WHERE m.id = 37 The result is: nom prenom categories areas Man Multi Carpentry,Paint,Walls Beirut,Baalbak,Saida It works correctly, but only when I hardcode the id that I want (37) into the query. I want it to work for all entries in the m3allem table, so I try this: SELECT nom ,prenom ,(SELECT GROUP_CONCAT(category_en) FROM (SELECT DISTINCT category_en FROM categories c WHERE id IN (SELECT DISTINCT category_id FROM m3allems_to_categories m2c WHERE m3allem_id = m.id) ) cS ) categories ,(SELECT GROUP_CONCAT(area_en) FROM (SELECT DISTINCT area_en FROM areas c WHERE id IN (SELECT DISTINCT area_id FROM m3allems_to_areas m2a WHERE m3allem_id = m.id) ) aSq ) areas FROM m3allems m And I get an error: Unknown column 'm.id' in 'where clause' Why? From the MySQL manual: 13.2.8.7. Correlated Subqueries [...] Scoping rule: MySQL evaluates from inside to outside. So... does this not work when the subquery is in the SELECT clause? I did not read anything about that. Does anyone know? What should I do? It took me a long time to build this query... I know it's a monster query, but it gets what I want in a single query, and I am so close to getting it to work! Can anyone help?

  • Autocompleting \cite{} with emacs + auctex gives "cite: no such database entry"

    - by Alejandro Weinstein
    Hi: I am running Emacs 23.1.1 and AucTeX 11.85 in an Ubuntu 8.10 machine. After opening a tex file, the first time I try to use the autocompletion of the \cite{} command, I get "cite: info not available, use `C-c &' to get it." in the minibuffer. After doing the 'C-c &', I get "byte-code: No BibTeX entry with citation key". Subsequent calls to \cite gives me the message "cite: no such database entry" . I have a \bibliography{library} in my tex file, and the \cite{} entries that I did manually work as expected. I have the following in my .emacs (require 'reftex) (setq-default TeX-master nil) (add-hook 'LaTeX-mode-hook 'TeX-PDF-mode) ;turn on pdf-mode. AUCTeX ;will call pdflatex to ;compile instead of latex. (add-hook 'LaTeX-mode-hook 'LaTeX-math-mode) ;turn on math-mode by ;default (add-hook 'LaTeX-mode-hook 'reftex-mode) ;turn on REFTeX mode by ;default (add-hook 'LaTeX-mode-hook 'flyspell-mode) ;turn on flyspell mode by ;default (setq reftex-plug-into-AUCTeX t) (setq TeX-auto-save t) (setq TeX-save-query nil) (setq TeX-parse-self t) (setq-default TeX-master nil) I also tried the suggestions in http://stackoverflow.com/questions/2699017/suggestion-for-cite-in-emacs-with-auctex, but it didn't work either. Alejandro.

  • Java Arrays.equals() returns false for two dimensional arrays.

    - by Achilles
    Hi there, I was just curious to know: why does Arrays.equals(double[][], double[][]) return false when in fact the arrays have the same number of elements and each element is the same? For example, I performed the following test: double[][] a, b; int size =5; a=new double[size][size]; b=new double[size][size]; for( int i = 0; i < size; i++ ) for( int j = 0; j < size; j++ ){ a[i][j]=1.0; b[i][j]=1.0; } if(Arrays.equals(a, b)) System.out.println("Equal"); else System.out.println("Not-equal"); This returns false and prints "Not-equal". On the other hand, if I have something like this: double[] a, b; int size =5; a=new double[size]; b=new double[size]; for( int i = 0; i < size; i++ ){ a[i]=1.0; b[i]=1.0; } if(Arrays.equals(a, b)) System.out.println("Equal"); else System.out.println("Not-equal"); } it returns true and prints "Equal". Does the method only work with single dimensions? If so, is there something similar for multi-dimensional arrays in Java?

  • Fun things you can do by mutating Java strings

    - by polygenelubricants
    So I've come around since I asked how to limit setAccessible to only “legitimate” uses and have come to embrace its power for fun. Enabled by its power, of course, is string mutation. import java.lang.reflect.Field; import java.util.Arrays; public class Mutator { static void mutate(Object obj, String field, Object newValue) { try { Field f = obj.getClass().getDeclaredField(field); f.setAccessible(true); f.set(obj, newValue); } catch (Exception e) { } } public static void mutate(String from, String to) { mutate(from, "value", to.toCharArray()); mutate(from, "count", to.length()); } public static void main(String args[]) { Mutator.mutate(System.getProperty("line.separator"), "<br/>\n"); System.out.println("Hello world!"); Mutator.mutate(Integer.toString(Integer.MIN_VALUE), "OMG!"); System.out.println(-2147483648); Mutator.mutate(String.valueOf((Object) null), "LOL!"); System.out.println(Arrays.toString(new int[3][])); Mutator.mutate(Arrays.toString(new int[0]), ":("); System.out.println(Arrays.toString(new byte[0])); } } Output (if no exception is thrown): Hello world!<br/> OMG!<br/> [LOL!, LOL!, LOL!]<br/> :(<br/> Let's see what other fun things we can come up with.

  • Where and how to start on C# and .Net Framework ?

    - by Rachel
    Currently I have been working as a PHP developer for approximately 1 year, and I want to learn about C# and the .NET Framework. I do not have any experience with the .NET Framework or C#, and there is no firm basis for choosing C# and .NET over Java or any other programming language; the decision is purely from a career and job-opportunity point of view. So my questions are: Is my decision to go the C# and .NET Framework route wise, after working for some time as a PHP developer? What are good resources I can refer to and learn from to get knowledge of C# and the .NET Framework? How should I go about learning C# and the .NET Framework? Which technologies should I learn or have experience with to be considered a C#/.NET developer? I am listing some technologies; please add or suggest more if I am missing any. Technologies: C#-THE LANGUAGE GUI APPLICATION DEVELOPMENT WINDOWS CONTROL LIBRARY DELEGATES DATA ACCESS WITH ADO.NET MULTI THREADING ASSEMBLIES WINDOWS SERVICES VB INTRODUCTION TO VISUAL STUDIO .NET WINDOWS CONTROL LIBRARY DATA ACCESS WITH ADO.NET ASP.NET WEB TECHNOLOGIES CONTROLS VALIDATION CONTROL STATE MANAGEMENT CACHING ASP.NET CONFIGURATION ADO.NET ASP.NET TRACING & SECURITY IN ASP.NET XMLPROGRAMMING WEB SERVICES CRYSTAL REPORTS SSRS (SQL Server Reporting Services) MS-Reports LINQ: NET Language-Integrated Query NET Language-Integrated Query LINQ to SQL: SQL Integration WCF: Windows Communication Foundation What Is Windows Communication Foundation? Fundamental Windows Communication Foundation Concepts Windows Communication Foundation Architecture WPF: Windows Presentation Foundation Getting Started (WPF) Application Development WPF Fundamentals What are your thoughts and suggestions on this, and from a job and market perspective, which areas of C#/.NET development should I focus on? I know this is a very subjective and long question, but advice would be highly appreciated. Thanks.

  • Reformat SQLGeography polygons to JSON

    - by James
    I am building a web service that serves geographic boundary data in JSON format. The geographic data is stored in an SQL Server 2008 R2 database using the geography type in a table. I use [ColumnName].ToString() method to return the polygon data as text. Example output: POLYGON ((-6.1646509904325884 56.435153006374627, ... -6.1606079906751 56.4338050060666)) MULTIPOLYGON (((-6.1646509904325884 56.435153006374627 0 0, ... -6.1606079906751 56.4338050060666 0 0))) Geographic definitions can take the form of either an array of lat/long pairs defining a polygon or in the case of multiple definitions, an array or polygons (multipolygon). I have the following regex that converts the output to JSON objects contained in multi-dimensional arrays depending on the output. Regex latlngMatch = new Regex(@"(-?[0-9]{1}\.\d*)\s(\d{2}.\d*)(?:\s0\s0,?)?", RegexOptions.Compiled); private string ConvertPolysToJson(string polysIn) { return this.latlngMatch.Replace(polysIn.Remove(0, polysIn.IndexOf("(")) // remove POLYGON or MULTIPOLYGON .Replace("(", "[") // convert to JSON array syntax .Replace(")", "]"), // same as above "{lng:$1,lat:$2},"); // reformat lat/lng pairs to JSON objects } This is actually working pretty well and converts the DB output to JSON on the fly in response to an operation call. However I am no regex master and the calls to String.Replace() also seem inefficient to me. Does anyone have any suggestions/comments about performance of this?
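
    If the chained String.Replace() calls ever show up as a real cost, one option — sketched below as an untested alternative rather than a drop-in replacement — is to do the whole rewrite in a single Regex.Replace pass with a MatchEvaluator, so the text is scanned and copied once. The pattern here is illustrative and would need adjusting to the exact WKT your columns produce.

```csharp
using System.Text.RegularExpressions;

public static class WktToJson
{
    // One compiled pattern handles the coordinate pairs and both kinds of bracket.
    private static readonly Regex Token = new Regex(
        @"(?<lng>-?\d+\.\d*)\s(?<lat>\d{2}\.\d*)(?:\s0\s0)?,?|(?<open>\()|(?<close>\))",
        RegexOptions.Compiled);

    public static string Convert(string polysIn)
    {
        // Drop the leading POLYGON / MULTIPOLYGON keyword, as in the original method.
        string body = polysIn.Remove(0, polysIn.IndexOf("("));

        return Token.Replace(body, m =>
            m.Groups["open"].Success ? "[" :
            m.Groups["close"].Success ? "]" :
            "{lng:" + m.Groups["lng"].Value + ",lat:" + m.Groups["lat"].Value + "},");
    }
}
```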

  • Need help in tuning a sql-query

    - by Viper
    Hello, I need some help boosting this SQL statement. The execution time is around 125 ms, and during the runtime of my program this SQL (more precisely: equally structured SQL for different tables) will be called 300,000 times. The average row count in the tables is around 10,000,000 rows, and new rows (updates/inserts) are added with a timestamp each day. The data that is interesting for this particular export program lies in the last 1-3 days; maybe that is helpful for deciding what index to create. The data I need is the currently valid row for a given id and the preceding data row, to get the updates (if one exists). We use an Oracle 11g database and the .NET Framework 3.5. SQL statement to boost: select ID_SOMETHING, -- Number(12) ID_CONTRIBUTOR, -- Char(4 Byte) DATE_VALID_FROM, -- DATE DATE_VALID_TO -- DATE from TBL_SOMETHING XID where ID_SOMETHING = :ID_INSTRUMENT and ID_CONTRIBUTOR = :ID_CONTRIBUTOR and DATE_VALID_FROM <= :EXPORT_DATE and DATE_VALID_TO >= :EXPORT_DATE order by DATE_VALID_FROM asc; Here I uploaded the current explain plan for this query. I'm not a database expert, so I don't know which index type would fit this requirement best. I have seen that there are many different possible index types which could be applied. Maybe Oracle optimizer hints are helpful, too. Does anyone have a good idea for tuning this SQL, or can anyone point me in the right direction?

  • Linking problems using libcurl with Visual C++ 2005: "unresolved external symbol __imp__curl_easy_se

    - by user88595
    Hi, I am planning to use libcurl in my project. I downloaded the library source, built it, and integrated it into a small POC application. I am able to build and run the application without any issues with the generated libcurl.dll and libcurl_imp.lib files. Now, when I integrate the same library in my project, I am getting linker errors: 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_setopt 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_perform 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_cleanup 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_global_init 6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_init I have researched and tried all manner of workarounds, like adding CURL_STATICLIB definitions, additional libraries, changing to /MT, even copying the libs to the release directory, but nothing seems to work. As far as I can see, the only difference between approach #1 and #2 in my steps is that #1 is a console application using libcurl.dll, while in my main project it is another DLL which is trying to link to libcurl.dll. Would that necessitate any change in approach? Can I use the same generated multi-threaded DLL /MD file for both (I tried /MT also with no success)? Any other ideas? Following are the linker options. -------------------------------------------------Working------------------------------------------------- /OUT:"C:\SampleFTP\Release\SampleFTP.exe" /INCREMENTAL:NO /NOLOGO /LIBPATH:"C:\SampleFTP\SampleFTP\Release" /MANIFEST /MANIFESTFILE:"Release\SampleFTP.exe.intermediate.manifest" /DEBUG /PDB:"c:\SampleFTP\release\SampleFTP.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /LTCG /MACHINE:X86 /ERRORREPORT:PROMPT libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib -------------------------------------------------Working------------------------------------------------- ----------------------------------------------NotWorking------------------------------------------------- /OUT:".......\nt\Win32\Release/foo__tests.dll" /INCREMENTAL:NO /NOLOGO /LIBPATH:"C:\FullLibPath\libcurl_libs" /LIBPATH:"......\nt\Win32\Release" /DLL /MANIFEST /MANIFESTFILE:".\foo_tests\Win32\Release\foo_tests.dll.intermediate.manifest" /DEBUG /PDB:".......\nt\Win32\Release/foo_tests.pdb" /OPT:REF /OPT:ICF /LTCG /IMPLIB:".......\nt\Win32\Release/foo_tests.lib" /MACHINE:X86 /ERRORREPORT:PROMPT odbc32.lib odbccp32.lib util_process.lib wsock32.lib Version.lib libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib "......\nt\win32\release\otherlib1.lib" "......\nt\win32\release\otherlib2.lib" ----------------------------------------------NotWorking-------------------------------------------------

  • Changing the system time zone succeeds once and then no longer changes

    - by Adam Driscoll
    I'm using the WinAPI to set the time zone on a Windows XP SP3 box. I'm reading the time zone information from the HKLM\Software\Microsoft\WindowsNT\Time Zones\<time zone name> key and then setting the time zone to the specified time zone. I enumerate the keys under the Time Zones key, grab the TZI value and stuff it into a TIME_ZONE_INFORMATION struct to be passed to SetTimeZoneInformation. All seems to work on the first pass: the time zone changes and no error is returned. The second time I perform this operation (same user, new session, on login before userinit) the call succeeds but the system does not reflect the time zone change. Neither the clock nor the time stamps on files are updated to the new time zone. When I navigate to HKLM\System\CurrentControlSet\Control\TimeZoneInformation my new time zone information is present. A couple of strange things are happening when I'm setting my time zone: when I parse the TZI binary value from the registry to store in my TIME_ZONE_INFORMATION struct, the struct has the DaylightDate.wDay and StandardDate.wDay fields always set to 0; and when I call GetTimeZoneInformation right after I call SetTimeZoneInformation, the call fails with a 1300 error (Not all privileges or groups referenced are assigned to the caller). I'm also making sure to send a WM_BROADCAST message so Explorer knows what's going on. Is it the parsing of the byte array into the TIME_ZONE_INFORMATION struct, or am I missing something else important? EDIT: Found a document stating why this is happening: here. The privilege was introduced in Vista... thanks, MSDN docs... Per the Microsoft documentation I'm enabling the SE_TIME_ZONE_NAME privilege for the current process's token. But when I attempt to call LookupPrivilegeValue for SE_TIME_ZONE_NAME I get a 1313 error (A specified privilege does not exist).

  • Rolling Back a Transaction with MySQL Connector in VB.net

    - by Jonathan
    Hey all- I have one multi-row INSERT statement (300 or so sets of values) that I would like to commit to the MySQL database in an all-or-nothing fashion. insert into table VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9); In some cases, a set of values in the command will not meet the criteria of the table (duplicate key, for example). When that happens I do not want any of the previous sets added to the database. I've implemented this with the following code, however, my rollback command doesn't appear to be making a difference. I've used this documentation: http://dev.mysql.com/doc/refman/5.0/es/connector-net-examples-mysqltransaction.html Dim transaction As MySqlTransaction = sqlConnection.BeginTransaction() sqlCommand = New MySqlCommand(insertStr, sqlConnection, transaction) Try sqlCommand.ExecuteNonQuery() Catch ex As Exception writeToLog("EXCEPTION: " & ex.Message & vbNewLine) writeToLog("Could not execute " & sqlCmd & vbNewLine) Try transaction.Rollback() writeToLog("All statements were rolled back." & vbNewLine) Return False Catch rollbackEx As Exception writeToLog("EXCEPTION: " & rollbackEx.Message & vbNewLine) writeToLog("All statements were not rolled back." & vbNewLine) Return False End Try End Try transaction.commit() I get the DUPLICATE KEY exception thrown, no Rollback Exception thrown, and every set of values up to duplicate key committed to the database. What am I doing wrong? Thanks- Jonathan

  • Display post data!

    - by Comii
    I am trying to post data from a VB.NET application to an ASMX web service that is located on a server. To post the data from the VB.NET application I am using this code: Public Function Post(ByVal url As String, ByVal data As String) As String Dim vystup As String = Nothing Try 'Our postvars Dim buffer As Byte() = Encoding.ASCII.GetBytes(data) 'Initialisation, we use localhost, change if applicable Dim WebReq As HttpWebRequest = DirectCast(WebRequest.Create(url), HttpWebRequest) 'Our method is post, otherwise the buffer (postvars) would be useless WebReq.Method = "POST" 'We use form contentType, for the postvars. WebReq.ContentType = "application/x-www-form-urlencoded" 'The length of the buffer (postvars) is used as contentlength. WebReq.ContentLength = buffer.Length 'We open a stream for writing the postvars Dim PostData As Stream = WebReq.GetRequestStream() 'Now we write, and afterwards, we close. Closing is always important! PostData.Write(buffer, 0, buffer.Length) PostData.Close() 'Get the response handle, we have no true response yet! Dim WebResp As HttpWebResponse = DirectCast(WebReq.GetResponse(), HttpWebResponse) 'Let's show some information about the response Console.WriteLine(WebResp.StatusCode) Console.WriteLine(WebResp.Server) 'Now, we read the response (the string), and output it. Dim Answer As Stream = WebResp.GetResponseStream() Dim _Answer As New StreamReader(Answer) 'Congratulations, you just requested your first POST page, you 'can now start logging into most login forms, with your application 'Or other examples. vystup = _Answer.ReadToEnd() Catch ex As Exception MessageBox.Show(ex.Message) End Try Return vystup.Trim() & vbLf End Function Now, how can I retrieve this data in the ASMX service?
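
    On the receiving side, a plain ASMX [WebMethod] will not bind a raw application/x-www-form-urlencoded body to its parameters unless the HttpPost protocol is enabled, so a common approach — shown below as an assumed setup, with the "data" field name purely illustrative — is to read the posted values from the current request inside the service (or to use an .ashx handler instead).

```csharp
using System.Web;
using System.Web.Services;

public class ReceiverService : WebService
{
    [WebMethod]
    public string Receive()
    {
        // The VB.NET client posts application/x-www-form-urlencoded content,
        // so any name=value pairs in the body land in Request.Form.
        HttpRequest request = HttpContext.Current.Request;
        string data = request.Form["data"]; // "data" is an assumed field name

        // Alternatively, read the raw body as a string:
        // using (var reader = new System.IO.StreamReader(request.InputStream))
        //     data = reader.ReadToEnd();

        return data ?? string.Empty;
    }
}
```

    If the client posts directly to a URL like Service.asmx/Receive rather than via SOAP, note that ASP.NET 2.0 and later disable the HttpPost protocol for remote callers by default; it can be re-enabled with an <add name="HttpPost"/> entry under <webServices><protocols> in web.config.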

  • how to initialize and implement the matrix inside the function in objective-C?

    - by Rajendra Bhole
    Hi, I want to develop an application in which I initialize a matrix for manipulation. The code is as follows: struct pixel { Byte r, g, b, a; int count; }; - (NSInteger) processImage1: (UIImage*) image { struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel)); if (pixels != nil) { // Create a new bitmap CGContextRef context = CGBitmapContextCreate( (void*) pixels, image.size.width, image.size.height, 8, image.size.width * 4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast ); if (context != NULL) { // Draw the image in the bitmap CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage); NSUInteger numberOfPixels = image.size.width * image.size.height; while (numberOfPixels > 0) { if (pixels->r == 254 || pixels->g == 77 || pixels->b==254) { numberOfRedPixels++; } pixels++; numberOfPixels--; } CGContextRelease(context); } free(pixels); } return 1; } I want to implement the matrix inside the - (NSInteger) processImage1: (UIImage*) image {} function. The matrix should have rows = image.size.width and columns = image.size.height.

  • How close can I get C# to the performance of C++ for small intensive tasks?

    - by SLC
    I was thinking about the speed difference between C++ and C# being mostly about C# compiling to byte-code that is taken in by the JIT compiler (is that correct?) and all the checks C# does. I notice that it is possible to turn a lot of these checks off, both in the compile options, and possibly through using the unsafe keyword, as unsafe code is not verifiable by the common language runtime. Therefore, if you were to write a simple console application in both languages that flipped an imaginary coin an infinite number of times and displayed the results to the screen every 10,000 or so iterations, how much speed difference would there be? I chose this because it's a very simple program. I'd like to test this but I don't know C++ or have the tools to compile it. This is my C# version though: static void Main(string[] args) { unsafe { Random rnd = new Random(); int heads = 0, tails = 0; while (true) { if (rnd.NextDouble() > 0.5) heads++; else tails++; if ((heads + tails) % 1000000 == 0) Console.WriteLine("Heads: {0} Tails: {1}", heads, tails); } } } Is the difference enough to warrant deliberately compiling sections of code "unsafe" or into DLLs that do not have some of the compile options like overflow checking enabled? Or does it go the other way, where it would be beneficial to compile sections in C++? I'm sure interop speed comes into play too then. To avoid subjectivity, I reiterate the specific parts of this question as: Does C# get a performance boost from using unsafe code? Do the compile options such as disabling overflow checking boost performance, and do they affect unsafe code? Would the program above be faster in C++ or negligibly different? Is it worth compiling long intensive number-crunching tasks in a language such as C++ or using /unsafe for a bonus? Less subjectively, could I complete an intensive operation faster by doing this?
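
    One way to move this from speculation to measurement — a sketch, not the asker's code — is to time a fixed number of iterations of the same loop with System.Diagnostics.Stopwatch, then port the identical loop to C++ and time it the same way. The iteration count and seed below are arbitrary.

```csharp
using System;
using System.Diagnostics;

class CoinFlipTiming
{
    static void Main()
    {
        const int iterations = 100000000;   // arbitrary, just large enough to measure
        var rnd = new Random(12345);        // fixed seed so repeated runs are comparable
        int heads = 0, tails = 0;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            if (rnd.NextDouble() > 0.5) heads++;
            else tails++;
        }
        sw.Stop();

        Console.WriteLine("Heads: {0} Tails: {1} in {2} ms",
                          heads, tails, sw.ElapsedMilliseconds);
    }
}
```

    For the comparison to mean anything, build in Release mode and run without the debugger attached, since the JIT suppresses many optimizations when a debugger is present.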

  • how to make a name from random numbers?

    - by blood
    my program makes a random name that could have a-z this code makes a 16 char name but :( my code wont make the name and idk why :( can anyone show me what's wrong with this? char name[16]; void make_random_name() { byte loop = -1; for(;;) { loop++; srand((unsigned)time(0)); int random_integer; random_integer = (rand()%10)+1; switch(random_integer) { case '1': name[loop] = 'A'; break; case '2': name[loop] = 'B'; break; case '3': name[loop] = 'C'; break; case '4': name[loop] = 'D'; break; case '5': name[loop] = 'E'; break; case '6': name[loop] = 'F'; break; case '7': name[loop] = 'G'; break; case '8': name[loop] = 'Z'; break; case '9': name[loop] = 'H'; break; } cout << name << "\n"; if(loop > 15) { break; } } }

  • Database sharing/versioning

    - by DarkJaff
    Hi everyone, I have a question but I'm not sure of the word to use. My problem: I have an application that uses a database to store information. The database can be in Access (local) or on a server (SQL Server or Oracle); we support these 3 kinds of databases. We want to give the user the possibility to do what I think we can call versioning. Let me explain: we have database 1. This is the master. We want to be able to create a database 2 that is the same thing as database 1, but which we can give to someone else. They each work on their own side, adding, modifying and deleting records in this very complex database. After that, we want database 1 to include the changes from database 2, but with the possibility to dismiss some of the changes. For your information, our application is already multi-user, so why don't we just use that and forget about this versioning? Because sometimes we need to give a copy of the database to another company on another site and they can't connect to our server. They work on their side and then we want to merge. Is there anyone here with experience with this type of requirement? We have a lot of ideas, but most of them require a LOT of work: massive modifications to the database or to the existing queries. This is a 2-million-line (and growing) C++ app, so rewriting it is not possible! Thanks for any ideas that you may give us! J-F

  • What's the compelling reason to upgrade to Visual Studio 2010 from VS2008?

    - by Cheeso
    Are there new features in Visual Studio 2010 that are must-haves? If so, which ones? For me, the big draws for VS2008 as compared to VS2005 were LINQ, .NET Framework multitargeting, WCF (REST + Syndication), and general devenv.exe reliability. Granted, some of these features are framework things, and not tool things. For the purposes of this discussion, I'm willing to combine them into one bucket. What is the list of must-have features for VS2010 versus VS2008? Are there any? I am particularly interested in C#. Update: I know how to google, so I can get the official list from Microsoft. I guess what I really wanted was, the assessment from people using it, as to which things are really notable. Microsoft went on for 3 pages about 2008/3.5 features, and many people sort of boiled it down to LINQ, and a few other things. What is that short list for VS2010? Summary so far, what people think is cool or compelling: Visual Studio engine multi-monitor support new extensibility model based on WPF, prettier and more usable new TFS stuff, incl automated test tools parallel debugging .NET Framework parallel extensions for .NET C# 4.0 generic variance optional and named params easier interop with non-managed environments, like COM or Javascript VB 10.0 collection and array literals / initializers automatic properties anonymous methods / statement lambdas I read up on these at Zander's blog. He described these and other features. Nobody on this list said anything about: Visual Studio engine F# support Javascript code-completion JQuery is now included UML better Sharepoint capabilities C++ moves to msbuild project files

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Serv

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure!! I was wondering if there was a definitive list somewhere regarding what is and is not supported by SQL Azure in regards to SQL Server 2008? I have had a look through google but I've noticed some of the blog posts are missing things which I have found through my own testing: For example, quite a lot is summarised in this blog entry http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html Common Language Runtime (CLR) Database file placement Database mirroring Distributed queries Distributed transactions Filegroup management Global temporary tables Spatial data and indexes SQL Server configuration options SQL Server Service Broker System tables Trace Flags which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to the Azure: XML Types (the msdn does mention large custom types - I guess it may include this?? even if the data schema is really small?) Multi-part views I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point to a list or give me any information till when these features are likely to be included in SQL Azure.

  • Request for the permission of type 'System.Web.AspNetHostingPermission' when compiling web site

    - by ahsteele
    I have been using Windows 7 for a while but have not had to work with a particular legacy intranet application since my upgrade. Unfortunately, this application is setup as an ASP.NET Website project hosted on a remote server. When I have the website open in Visual Studio 2008 and try to debug it: Request for the permission of type 'System.Web.AspNetHostingPermission' failed To resolve this issue on Windows Vista machines I would change the machine's .NET Security Configuration to trust the local intranet. I believe this configuration utility relied upon the mscorcfg.msc which from some cursory research appears to be apart of the .NET 2.0 SDK. I have tried to follow the instructions from this Microsoft Support article running the command below to no avail. Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on Presently, I have the following .NET and ASP.NET components installed on my machine Microsoft .NET Compact Framework 2.0 SP2 Microsoft .NET Compact Framework 3.5 Microsoft .NET Framework 4 Client Profile Microsoft .NET Framework 4 Extended Microsoft .NET Framework 4 Multi-Targeting Pack Microsoft ASP.NET MVC 1.0 Microsoft ASP.NET MVC 2 Microsoft ASP.NET MVC 2 - Visual Studio 2008 Tools Microsoft ASP.NET MVC 2 - Visual Studio 2010 Tools Do I need to install the .NET 2.0 SDK? Am I issuing the caspol command incorrectly? Is there something else that I am missing?

  • django powering multiple shops from one code base on a single domain

    - by imanc
    Hey, I am new to django and python and am trying to figure out how to modify an existing app to run multiple shops through a single domain. Django's sites middleware seems inappropriate in this particular case because it manages different domains, not sites run through the same domain, e.g. : domain.com/uk domain.com/us domain.com/es etc. Each site will need translated content - and minor template changes. The solution needs to be flexible enough to allow for easy modification of templates. The forms will also need to vary a bit, e.g minor variances in fields and validation for each country specific shop. I am thinking along the lines of the following as a solution and would love some feedback from experienced django-ers: In short: same codebase, but separate country specific urls files, separate templates and separate database Create a middleware class that does IP localisation, determines the country based on the URL and creates a database connection, e.g. /au/ will point to the au specific database and so on. in root urls.py have routes that point to a separate country specific routing file, e..g (r'^au/',include('urls_au')), (r'^es/',include('urls_es')), use a single template directory but in that directory have a localised directory structure, e.g. /base.html and /uk/base.html and write a custom template loader that looks for local templates first. (or have a separate directory for each shop and set the template directory path in middleware) use the django internationalisation to manage translation strings throughout slight variances in forms and models (e.g. ZA has an ID field, France has 'door code' and 'floor' etc.) I am unsure how to handle these variations but I suspect the tables will contain all fields but allowing nulls and the model will have all fields but allowing nulls. The forms will to be modified slightly for each shop. Anyway, I am keen to get feedback on the best way to go about achieving this multi site solution. It seems like it would work, but feels a bit "hackish" and I wonder if there's a more elegant way of getting this solution to work. Thanks, imanc

  • Encryption puzzle / How to create a ProxyStub for a Remote Assistance ticket

    - by Jon Clegg
    I am trying to create a ticket for Remote Assistance. Part of that requires creating a PassStub parameter. According to the documentation: http://msdn.microsoft.com/en-us/library/cc240115(PROT.10).aspx PassStub: The encrypted novice computer's password string. When the Remote Assistance Connection String is sent as a file over e-mail, to provide additional security, a password is used.<16 In part 16 they detail how to create a PassStub. In Windows XP and Windows Server 2003, when a password is used, it is encrypted using the PROV_RSA_FULL predefined cryptographic provider with MD5 hashing and CALG_RC4, the RC4 stream encryption algorithm. The PassStub looks like this in the file: PassStub="LK#6Lh*gCmNDpj" If you want to generate one yourself, run msra.exe in Vista or run the Remote Assistance tool in WinXP. The documentation says this stub is the result of the function CryptEncrypt, with the key derived from the password and encrypted with the session id (those are also in the ticket file). The problem is that CryptEncrypt produces binary output way larger than the 15-byte PassStub. Also, the PassStub isn't encoded in any way I've seen before. Some interesting things about the PassStub encoding: after doing statistical analysis, the 3rd char is always one of !#$&()+-=@^. The only symbols seen everywhere are *_ . Otherwise the valid characters are 0-9 a-z A-Z. There are a total of 75 valid characters and they are always 15 bytes. Running msra.exe with the same password always generates a different PassStub, indicating that it is not a direct hash but includes the rasessionid as they say. Another idea I've had is that it is not the direct result of CryptEncrypt, but a result of the rasessionid in the MD5 hash. In MS-RA (http://msdn.microsoft.com/en-us/library/cc240013(PROT.10).aspx), the "PassStub Novice" is simply hex-encoded, and looks to be the right length. The problem is I have no idea how to go from any hash to the way the ProxyStub looks.

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appears to be two ways of storing huge gigapixel images: Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format. Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later. In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are: Storing images in a way that allows for rapid random access (tiling) Storing downsampled images for rapid zooming (pyramid) Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image) Storing important metadata, including associated images like a slide's label and thumbnail Support for lossy storage What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules: a password, "foobar12" (we are not discussing the strength of the password). a language, Java 1.6 for this discussion a database, postgreSQL, MySQL, SQL Server, Oracle Several options are available to storing the password, but I want to think about one (1): Store the password hashed with random salt in the DB, one column Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual-columns. The latter have pros and cons MD5/SHA1 is simple. MessageDigest in Java provides MD5, SHA1 (through SHA512 in modern implementations, certainly 1.6). Additionally, most RDBMSs listed provide methods for MD5 encryption functions on inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password and the cost of updating the table and the code could be too high. But it is storing the password hashed with a random salt in single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected and considers legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have theory on this, and as I type I might be grokking the concept: Given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128 byte salt and getting the substring of the original hash that is the hashed password. Then re hashing to match using the hash algorithm...??? So... any takers on helping. :) Am I close?
