Search Results

Search found 22427 results on 898 pages for 'opn program'.


  • C# Minimize all running windows when application runs

    - by Derek
    I am working on a C# Windows Forms application. How can I change my code so that when my webcam detects 2 or more faces (that is, when "FaceDetectedLabel.Text = "Faces Detected : " + cam.facesdetected.ToString();" produces "Faces Detected : 2" or higher) the application does the following: minimize all running programs except my application, then log out of the computer. Here is my code:

        namespace PBD
        {
            public partial class MainPage : Form
            {
                // declaring global variables
                private Capture capture; // takes images from camera as image frames

                public MainPage()
                {
                    InitializeComponent();
                }

                private void ProcessFrame(object sender, EventArgs arg)
                {
                    Wrapper cam = new Wrapper();
                    // show the image in the EmguCV ImageBox
                    WebcamPictureBox.Image = cam.start_cam(capture).Resize(390, 243, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC).ToBitmap();
                    FaceDetectedLabel.Text = "Faces Detected : " + cam.facesdetected.ToString();
                }

                private void MainPage_Load(object sender, EventArgs e)
                {
                    #region if capture is not created, create it now
                    if (capture == null)
                    {
                        try
                        {
                            capture = new Capture();
                        }
                        catch (NullReferenceException excpt)
                        {
                            MessageBox.Show(excpt.Message);
                        }
                    }
                    #endregion
                    Application.Idle += ProcessFrame;
                }
            }
        }
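    A possible approach (a sketch, not tested against this project): check cam.facesdetected inside ProcessFrame, minimize everything through the Shell.Application COM object, and log off via the Win32 ExitWindowsEx call. The threshold check and the class/method names below are assumptions; the COM and Win32 calls themselves are standard.

        using System;
        using System.Reflection;
        using System.Runtime.InteropServices;

        static class LockdownActions
        {
            [DllImport("user32.dll", SetLastError = true)]
            static extern bool ExitWindowsEx(uint uFlags, uint dwReason);

            const uint EWX_LOGOFF = 0; // end the calling user's session

            public static void MinimizeAllAndLogOff()
            {
                // Same effect as pressing Win+M: ask the shell to minimize every window.
                Type shellType = Type.GetTypeFromProgID("Shell.Application");
                object shell = Activator.CreateInstance(shellType);
                shellType.InvokeMember("MinimizeAll", BindingFlags.InvokeMethod, null, shell, null);

                // Log the current user off; dwReason 0 means no specific shutdown reason.
                ExitWindowsEx(EWX_LOGOFF, 0);
            }
        }

    Calling this from ProcessFrame when cam.facesdetected >= 2 would give the behaviour described; note that MinimizeAll also minimizes your own form, so you may want to restore it with this.WindowState = FormWindowState.Normal afterwards.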

    Read the article

  • C++ data member initializer is not allowed

    - by user1435915
    I'm totally new to C++, so bear with me. I want to make a class with a static array and access that array from main. Here is what I would do in C#:

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Class a = new Class();
                    Console.WriteLine(a.arr[1]);
                }
            }
        }

        // =====================

        namespace ConsoleApplication1
        {
            class Class
            {
                public static string[] s_strHands = new string[] { "one", "two", "three" };
            }
        }

    Here is what I have tried:

        // justfoolin.cpp : Defines the entry point for the console application.
        //
        #include "stdafx.h"
        #include <string>
        #include <iostream>
        using namespace std;

        class Class
        {
        public:
            static string arr[3] = {"one", "two", "three"};
        };

        int _tmain(int argc, _TCHAR* argv[])
        {
            Class x;
            cout << x.arr[2] << endl;
            return 0;
        }

    But I got: IntelliSense: data member initializer is not allowed

    Read the article

  • How to repeatedly show a Dialog with PyGTK / Gtkbuilder?

    - by Julian
    I have created a PyGTK application that shows a dialog when the user presses a button. The dialog is loaded in my __init__ method with:

        builder = gtk.Builder()
        builder.add_from_file("filename")
        builder.connect_signals(self)
        self.myDialog = builder.get_object("dialog_name")

    In the event handler, the dialog is shown with the command self.myDialog.run(), but this only works once, because after run() the dialog is automatically destroyed. If I click the button a second time, the application crashes. I read that there is a way to use show() instead of run() where the dialog is not destroyed, but I feel like this is not the right way for me because I would like the dialog to behave modally and to return control to the code only after the user has closed it. Is there a simple way to repeatedly show a dialog using the run() method and GtkBuilder? I tried reloading the whole dialog using the builder, but that did not really seem to work; the dialog was missing all child elements (and I would prefer to have to use the builder only once, at the beginning of the program).

    [SOLUTION] As pointed out by the answer below, using hide() does the trick. But one has to take care: the dialog is in fact destroyed if one does not catch its "delete-event". A simple example that works is:

        import pygtk
        import gtk

        class DialogTest:
            def rundialog(self, widget, data=None):
                self.dia.show_all()
                result = self.dia.run()

            def destroy(self, widget, data=None):
                gtk.main_quit()

            def closedialog(self, widget, data=None):
                self.dia.hide()
                return True

            def __init__(self):
                self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
                self.window.connect("destroy", self.destroy)
                self.dia = gtk.Dialog('TEST DIALOG', self.window,
                                      gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
                self.dia.vbox.pack_start(gtk.Label('This is just a Test'))
                self.dia.connect("delete-event", self.closedialog)
                self.button = gtk.Button("Run Dialog")
                self.button.connect("clicked", self.rundialog, None)
                self.window.add(self.button)
                self.button.show()
                self.window.show()

        if __name__ == "__main__":
            testApp = DialogTest()
            gtk.main()

    Read the article

  • Broken console in Maven project using Netbeans

    - by Maciek Sawicki
    Hi, I have a strange problem with my Netbeans+Maven installation. This is the shortest code that reproduces the problem:

        public class App
        {
            public static void main( String[] args )
            {
                // Create a scanner to read from keyboard
                Scanner scanner = new Scanner(System.in);
                Scanner s = new Scanner(System.in);
                String param = s.next();
                System.out.println(param);
            }
        }

    When I run it as a Maven project inside Netbeans, the console seems to be broken. It just ignores my input; it looks like an infinite loop at System.out.println(param). However, this project works fine when it's compiled as a "Java Application" project. It also works OK if I build and run it from cmd. System info:

        OS: Vista
        IDE: Netbeans 6.8
        Maven: apache-maven-2.2.1

    //edit The built program (using Maven from Netbeans) works fine; I just can't test it inside Netbeans. And I think I forgot to ask the question ;). So of course my first question is: how can I fix this problem? And second: is there any workaround for this? For example, configuring Netbeans to run an external command-line app instead of using the built-in console.

    Read the article

  • C# - retrieve file path from config file - @ doesn't do its magic

    - by Bart
    Hi guys, I'm currently working on a web service that receives an XML message, archives it and then processes it further. The archive folder is read from Web.config. This is what the archive method looks like:

        private void Archive(System.Xml.XmlDocument xmlDocument)
        {
            try
            {
                string directory = System.Configuration.ConfigurationManager.AppSettings.Get("ArchivePath");
                ParseMessage(xmlDocument);
                directory = string.Format(@"{0}\{1}\{2}", @directory, _senderService, DateTime.Now.ToString("MMMyyyy"));
                System.IO.Directory.CreateDirectory(directory);
                string Id = _messageID;
                string senderService = _senderService;
                xmlDocument.Save(directory + @"\" + DateTime.Now.ToString("yyyyMMdd_") + Id + "_" + System.Guid.NewGuid().ToString().Substring(0, 13) + ".xml");
            }

    The path structure I retrieve is C:\Program Files\Subfolder\Subfolder. In the development, QA, UAT and PRD environments everything works fine. But on another machine on which I now need to install the web service (and which I cannot debug, unfortunately), the directory string is 'C:Files'. Just to be sure, I double-checked the .NET version on the different machines (I thought perhaps the usage of @ before a string was version-dependent); all machines use 2.0.50727. Does anyone recognize this problem? Thanks in advance!
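    Worth noting: the @ prefix is purely a compile-time marker for verbatim string literals; writing @directory on a variable compiles but changes nothing at runtime, so the 'C:Files' value is almost certainly what that machine's Web.config actually delivers. A sketch of a more defensive version (the helper name and logging line are illustrative, not from the original service); Path.Combine avoids hand-built separators, and the nested form keeps it .NET 2.0-compatible:

        using System;
        using System.Configuration;
        using System.IO;

        static class ArchivePaths
        {
            public static string BuildArchiveDirectory(string senderService)
            {
                string root = ConfigurationManager.AppSettings["ArchivePath"];

                // Log the raw value once on the problem machine: if this already
                // prints "C:Files", the deployed Web.config is wrong, not the code.
                Console.WriteLine("ArchivePath = '{0}'", root);

                // .NET 2.0 only has the two-argument overload of Path.Combine.
                string directory = Path.Combine(Path.Combine(root, senderService),
                                                DateTime.Now.ToString("MMMyyyy"));
                Directory.CreateDirectory(directory);
                return directory;
            }
        }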

    Read the article

  • Attempting to Convert Byte[] into Image... but are there platform issues involved?

    - by user305535
    Greetings, Currently I'm attempting to develop an application that takes a byte array that is streamed to us from a Linux C-language program across a TcpClient (stream) and reassembles it back into an image/jpg. The "sending" application was developed by an off-site developer who claims that the image reassembles back into an image without any problems or errors in his test environment (all Linux)... However, we are not so fortunate. I (believe) we successfully get all of the data sent, storing it as a string (which lets us append the stream until it is complete) and then we convert it back into a byte[]. This appears to be working fine... But when we take the byte[] we get from the streaming (and our string assembly) and try to convert it into an image using System.Drawing.Image.FromStream(), we get errors. Anyone have any idea what we're doing wrong? Or does anyone know if this is a cross-platform issue? We're developing our app for Windows XP and C# .NET, but the off-site developer did his work in C and Linux... perhaps there's some difference in how each operating system converts images into byte arrays? Anyway, here's the code for converting our received byte array (from the TcpClient stream) into an image. This code works when we send an image from a test machine we built that runs on XP, but not from the Linux box...

        System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
        byte[] imageBytes = encoding.GetBytes(data);

        // Convert byte[] to Image
        MemoryStream ms = new MemoryStream(imageBytes, 0, imageBytes.Length);
        ms.Write(imageBytes, 0, imageBytes.Length);
        System.Drawing.Image image = System.Drawing.Image.FromStream(ms, false);
        // <-- DIES here, throws a System.ArgumentException: Parameter is not valid.

    Any advice, suggestions, theories, or HELP would be GREATLY appreciated! Please let me know??? Best wishes all! Thanks in advance! Greg
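    The platform is unlikely to be the culprit; the string round-trip is. JPEG data is binary, and ASCIIEncoding.GetBytes maps every character above 0x7F to '?', so the buffer handed to Image.FromStream is already corrupted by the time it gets there. A sketch that keeps the payload as raw bytes end to end, assuming the sender closes the connection when the image is complete:

        using System.Drawing;
        using System.IO;
        using System.Net.Sockets;

        static class ImageReceiver
        {
            public static Image ReceiveImage(TcpClient client)
            {
                MemoryStream ms = new MemoryStream();
                using (NetworkStream ns = client.GetStream())
                {
                    byte[] buffer = new byte[8192];
                    int read;
                    // Accumulate raw bytes; never round-trip them through a string.
                    while ((read = ns.Read(buffer, 0, buffer.Length)) > 0)
                        ms.Write(buffer, 0, read);
                }
                ms.Position = 0;
                // GDI+ wants the stream alive for the image's lifetime,
                // so the MemoryStream is deliberately not disposed here.
                return Image.FromStream(ms);
            }
        }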

    Read the article

  • FreqTable with random values (C#)

    - by Sef
    Hello, I would like to make a frequency table with random numbers. So I have created an array that holds 11 random values between 0 and 9999:

        public void FillArrayRandom(int[] T)
        {
            Random Rndint = new Random();
            for (int i = 0; i < T.Length; i++)
            {
                T[i] = Rndint.Next(0, 9999);
            }
        } /*FillArrayRandom*/

    The result I want looks something like this, with the bar height capped at 21 rows (so that is a constant):

                            *
              *             *
              *             *     (the highest value will have the largest row/bar)
        *     *             *
        0     1     2     3       ...    (index values)
        931   6669  10    8899    ...    (up to 11 random values)

    My question is: how exactly do I calculate the bar heights for those 11 random values? The bars should keep a relative relation to each other depending on their size, and I would only like to use one single array in my program (for the generated values). F = (F * 21?) / ...? I really have no clue how to obtain the proper results. If a scaled frequency is 21, write a star in row 21; if it is 20, write a star in row 20; and so on. Regards.
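    The usual trick is linear rescaling: multiply each value by the maximum bar height and divide by the largest value in the array, i.e. height = value * 21 / max, so the largest value maps to exactly 21 rows. A sketch using only the one array of generated values (the method name is mine):

        using System;
        using System.Linq;

        static class Histogram
        {
            public static void Print(int[] t)
            {
                const int maxHeight = 21;
                int max = t.Max(); // the tallest bar gets exactly 21 rows

                // Print from the top row down to row 1.
                for (int row = maxHeight; row >= 1; row--)
                {
                    foreach (int v in t)
                    {
                        int height = v * maxHeight / max; // scale 0..max onto 0..21
                        Console.Write(height >= row ? " * " : "   ");
                    }
                    Console.WriteLine();
                }

                for (int i = 0; i < t.Length; i++)
                    Console.Write("{0,2} ", i); // index row underneath the bars
                Console.WriteLine();
            }
        }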

    Read the article

  • ASP.Net GridView paging, PageIndex always == 0

    - by David Archer
    Hi all, Having a slight problem with my ASP.Net 3.5 app. I'm trying to get the program to pick up which page number has been clicked. I'm using ASP.Net's built-in AllowPaging="True" function. It's never the same without code, so here it is. ASP.Net:

        <asp:GridView ID="GridView1" runat="server" CellPadding="4" ForeColor="#333333"
                      GridLines="Vertical" Width="960px" AllowSorting="True"
                      EnableSortingAndPagingCallbacks="True" AllowPaging="True" PageSize="25">
            <RowStyle BackColor="#F7F6F3" ForeColor="#333333" />
            <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
            <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" />
            <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" />
            <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
            <EditRowStyle BackColor="#999999" />
            <AlternatingRowStyle BackColor="White" ForeColor="#284775" />
        </asp:GridView>

    C#:

        var fillTable = from ft in db.IncidentDatas
                        where ft.pUserID == Convert.ToInt32(ClientList.SelectedValue.ToString())
                        select new
                        {
                            Reference = ft.pRef.ToString(),
                            Date = ft.pIncidentDateTime.Value.Date.ToShortDateString(),
                            Time = ft.pIncidentDateTime.Value.TimeOfDay,
                            Premesis = ft.pPremises.ToString(),
                            Latitude = ft.pLat.ToString(),
                            Longitude = ft.pLong.ToString()
                        };

        if (fillTable.Count() > 0)
        {
            GridView1.DataSource = fillTable;
            GridView1.DataBind();
            var IncidentDetails = fillTable.ToList();
            for (int i = 0; i < IncidentDetails.Count(); i++)
            {
                int pageno = GridView1.PageIndex;
                int pagenostart = pageno * 25;
                if (i >= pagenostart && i < (pagenostart + 25))
                {
                    // Processing
                }
            }
        }

    Any idea why GridView1.PageIndex is always 0? The thing is, the paging works correctly for the grid view... it will always go to the correct page, but PageIndex is always 0 when I try to read it. Help!
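    One likely explanation (an educated guess, not verified against this app): with EnableSortingAndPagingCallbacks="True" the paging happens through client-side callbacks, so code that runs in the ordinary postback/Page_Load path only ever sees the initial PageIndex of 0. A common pattern is to turn the callbacks off and handle the server-side paging event yourself; BindGrid() below is a hypothetical helper holding the LINQ query and DataBind call:

        // Markup change: EnableSortingAndPagingCallbacks="False",
        // plus OnPageIndexChanging="GridView1_PageIndexChanging".
        protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
        {
            GridView1.PageIndex = e.NewPageIndex; // the page the user actually clicked
            BindGrid();                           // re-run the query and DataBind
        }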

    Read the article

  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input to PowerShell.exe:

        >ping localhost | powershell -NonInteractive -NoProfile -C "do {$line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm:ss') + $line) | Write-Host; } while ($line -ne $null)"
        23:56:45time<1ms
        23:56:45
        23:56:46time<1ms
        23:56:46
        23:56:47time<1ms
        23:56:47
        23:56:47

    Normally 'ping localhost' from Vista64 looks like this, so there is a lot of data missing from the output above:

        Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms

        Ping statistics for ::1:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    But using the same API from C# receives all the data sent to the process (excluding some newline differences). Code:

        namespace ConOutTime
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string s;
                    while ((s = Console.ReadLine()) != null)
                    {
                        if (s.Length > 0) // don't write time for empty lines
                            Console.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, s);
                    }
                }
            }
        }

    Output:

        00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        00:44:30 Reply from ::1: time<1ms
        00:44:31 Reply from ::1: time<1ms
        00:44:32 Reply from ::1: time<1ms
        00:44:33 Reply from ::1: time<1ms
        00:44:33 Ping statistics for ::1:
        00:44:33 Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        00:44:33 Approximate round trip times in milli-seconds:
        00:44:33 Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So, when calling the same API from PowerShell instead of C#, many parts of stdin get 'eaten'. Is the PowerShell host reading from stdin even though I didn't use 'PowerShell.exe -Command -'?

    Read the article

  • CGContextDrawImage returning bad access

    - by Marcelo
    Hello guys, I've been trying to blend two UIImages for about 2 days now and I've been getting some BAD_ACCESS errors. First of all, I have two images that have the same orientation; basically I'm using Core Graphics to do the blending. One curious detail: every time I modify the code, the first time I compile and run it on the device I get to do everything I want without any sort of trouble. Once I restart the application, I get the error and the program shuts down. Can anyone give me a light? I tried accessing the baseImage sizes dynamically, but that gives me bad access too. Here's a snippet of how I'm doing the blending:

        UIGraphicsBeginImageContext(CGSizeMake(320, 480));
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 0, 480);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, rect, [baseImage CGImage]);
        CGContextSetBlendMode(context, kCGBlendModeOverlay);
        CGContextDrawImage(context, rect, [tmpImage CGImage]);
        [transformationView setImage:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();

    Read the article

  • Easy way to lock a file on a remote machine (windows)?

    - by roufamatic
    I've tracked down an error in my logs and am trying to reproduce it. My theory is that a file sometimes gets locked in a specific folder, and when the application (ASP.NET) tries to delete that folder it hangs. I don't have the application running on my own machine, so I'm debugging this on a remote server. But for the life of me, I can't seem to figure out a way to lock a file so that it can't be deleted by the process. My first thought was to map the network path to a local drive and just leave a command prompt open in that folder. Locally that always fouls up my folder deletes, but apparently SMB is a bit more robust and doesn't grant me a lock. After that I created an infinite-loop vbscript in the folder and executed it remotely. The file was deleted out from underneath the executing code. Man! I then tried creating a file on the server in that folder and removing all permissions. That didn't do the trick either. I don't have access to the IIS settings, so perhaps it's running under a privileged system account. So: what's a free program I can quickly use to create an exclusive lock on a file, so I can test my delete theory? Like a really, really bad Notepad clone or something. :-)
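    No third-party tool is strictly needed here; a few lines of C# run on the server will hold an exclusive lock. FileShare.None refuses other readers, writers and (on Windows) deleters until the handle closes. A minimal sketch of exactly the "really bad Notepad clone" asked for:

        using System;
        using System.IO;

        class FileLocker
        {
            static void Main(string[] args)
            {
                // FileShare.None: no other process may read, write or delete
                // the file while this handle is open.
                using (FileStream fs = new FileStream(args[0], FileMode.Open,
                                                      FileAccess.ReadWrite, FileShare.None))
                {
                    Console.WriteLine("Holding exclusive lock on {0}; press Enter to release.", args[0]);
                    Console.ReadLine();
                }
            }
        }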

    Read the article

  • How do you parse the XDG/GNOME/KDE menu/desktop item structure in C++?

    - by Joe Soul-bringer
    I would like to parse the menu structure for Gnome Panels (the standard Gnome desktop application launcher) and its KDE equivalent using C/C++ function calls. That is, I'd like a list of the base menu categories and submenus installed on a given machine, obtained through fairly simple C/C++ function calls (with NO shelling out, please). I understand that these menus are in the standard XDG format, and that the menu structure is stored in XML files such as /home/user/.config/menus/applications.menu. I've looked here: http://www.freedesktop.org/wiki/Specifications/menu-spec?action=show&redirect=Standards%2Fmenu-spec but all they offer is the standard and some shell files for inserting item entries (I don't want shell scripts, I don't want installation, and I definitely don't want to create a C library from the XDG specification; I want to find the existing menu structure). I've also looked at http://library.gnome.org/admin/system-admin-guide/stable/menustructure-13.html.en for more notes on these structures. None of this gives me a good idea of how to determine the menu structure from a C/C++ program. The actual Gnome menu files seem to be horrifically hairy things: they don't show the menu structure itself but give an XML-coded description of all the changes the menus have gone through since installation. I assume Gnome Panels parses these files, so there must be a function buried somewhere to do this, but I've yet to find it after scanning library.gnome.org for a couple of days. I've scanned the Nautilus source code as well, but Panels seems to live elsewhere, or is buried well. Thanks in advance

    Read the article

  • Linux C debugging library to detect memory corruptions

    - by calandoa
    When working some time ago on an embedded system with a simple MMU, I used to program that MMU dynamically to detect memory corruption. For instance, at some moment at runtime, the foo variable was overwritten with unexpected data (probably by a dangling pointer or whatever). So I added some additional debugging code:

    - at init, the memory used by foo was marked as a forbidden region to the MMU;
    - each time foo was accessed on purpose, access to the region was allowed just before, then forbidden again just after;
    - an MMU irq handler was added to dump the master and the address responsible for the violation.

    This was actually some kind of watchpoint, but directly self-handled by the code itself. Now, I would like to reuse the same trick on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how Linux uses it, but I wonder if any library/tool/system call already exists to deal with this problem. Note that I am aware that various tools such as Valgrind or GDB exist to manage memory problems, but as far as I know, none of these tools can be dynamically reconfigured by the debugged code. I am mainly interested in user space under Linux, but any info on kernel mode or on Windows is also welcome!

    Read the article

  • In Python, how can I find the index of the first item in a list that is NOT some value?

    - by Ryan B. Lynch
    Python's list type has an index(x) method. It takes a single parameter x and returns the (integer) index of the first item in the list that has the value x. Basically, I need to invert the index(x) method: I need to get the index of the first value in a list that does NOT have the value x. I would probably be able to even just use a function that returns the index of the first item with a value != None. I can think of a 'for' loop implementation with an incrementing counter variable, but I feel like I'm missing something. Is there an existing method, or a one-line Python construction, that can handle this? In my program, the situation comes up when I'm handling lists returned from complex regex matches. All but one item in each list have a value of None. If I just needed the matched string, I could use a list comprehension like '[x for x in [my_list] if x is not None]', but I need the index in order to figure out which capture group in my regex actually caused the match.

    Read the article

  • How to get encoding from MAPI message with PR_BODY_A tag (Windows Mobile)?

    - by SadSido
    Hi, everyone! I am developing a program that handles incoming e-mail and SMS through Windows Mobile MAPI. The code basically looks like this:

        ulBodyProp = PR_BODY_A;
        hr = piMessage->OpenProperty(ulBodyProp, NULL, STGM_READ, 0, (LPUNKNOWN*)&piStream);
        if (hr == S_OK)
        {
            // ... get body size in bytes ...
            STATSTG statstg;
            piStream->Stat(&statstg, 0);
            ULONG cbBody = statstg.cbSize.LowPart;

            // ... allocate memory for the buffer ...
            BYTE* pszBodyInBytes = NULL;
            boost::scoped_array<BYTE> szBodyInBytesPtr(pszBodyInBytes = new BYTE[cbBody+2]);

            // ... read body into the pszBodyInBytes ...
        }

    That works, and I get a message body. The problem is that this body is multibyte-encoded and I need to return a Unicode string. I guess I have to use the ::MultiByteToWideChar() function, but how can I tell which codepage to apply? Using CP_UTF8 is naive, because the text can simply not be UTF-8. Using CP_ACP works, well, sometimes, but sometimes it does not. So, my question is: how can I retrieve the information about the message codepage? Does MAPI provide any functions for it? Or is there a way to decode a multibyte string other than MultiByteToWideChar()? Thanks!

    Read the article

  • F# numeric associations

    - by b1g3ar5
    I have a numeric association for a custom type as follows:

        let DiffNumerics =
            { new INumeric<Diff> with
                member op.Zero = C 0.0
                member op.One = C 1.0
                member op.Add(a,b) = a + b
                member op.Subtract(a,b) = a - b
                member op.Multiply(a,b) = a * b
                member ops.Negate(a) = Diff.negate a
                member ops.Abs(a) = Diff.abs a
                member ops.Equals(a, b) = ((=) a b)
                member ops.Compare(a, b) = Diff.compare a b
                member ops.Sign(a) = int (Diff.sign a).Val
                member ops.ToString(x,fmt,fmtprovider) = failwith "not implemented"
                member ops.Parse(s,numstyle,fmtprovider) = failwith "not implemented" }
        GlobalAssociations.RegisterNumericAssociation(DiffNumerics)

    It works fine in F# Interactive, but crashes when I run the compiled program, because .ElementOps is not filled correctly for a matrix of these types. Any ideas why this might be?

    EDIT: In fsi, the code

        let A = dmatrix [[Diff.C 1.;Diff.C 2.;Diff.C 3.];[Diff.C 4.;Diff.C 5.;Diff.C 6.]]
        let B = matrix [[1.;2.;3.];[4.;5.;6.]]

    gives:

        > A.ElementOps;;
        val it : INumeric<Diff> = FSI_0003.NewAD+DiffNumerics@258
        > B.ElementOps;;
        val it : INumeric<float> = Microsoft.FSharp.Math.Instances+FloatNumerics@115

    In the debugger, A.ElementOps shows:

        '(A).ElementOps' threw an exception of type 'System.NotSupportedException'

    and, for the B matrix:

        Microsoft.FSharp.Math.Instances+FloatNumerics@115

    So somehow the DiffNumerics association isn't making it into the compiled program.

    Read the article

  • Delphi TRttiType.GetMethods return zero TRttiMethod instances

    - by conciliator
    I've recently been able to fetch a TRttiType for an interface using TRttiContext.FindType, using Robert Love's "GetType" workaround ("registering" the interface with an explicit call to ctx.GetType, e.g. RType := ctx.GetType(TypeInfo(IMyPrettyLittleInterface));). One logical next step would be to iterate the methods of said interface. Consider:

        program rtti_sb_1;

        {$APPTYPE CONSOLE}

        uses
          SysUtils,
          Rtti,
          mynamespace in 'mynamespace.pas';

        var
          ctx: TRttiContext;
          RType: TRttiType;
          Method: TRttiMethod;

        begin
          ctx := TRttiContext.Create;
          RType := ctx.GetType(TypeInfo(IMyPrettyLittleInterface));
          if RType <> nil then
          begin
            for Method in RType.GetMethods do
              WriteLn(Method.Name);
          end;
          ReadLn;
        end.

    This time, my mynamespace.pas looks like this:

        IMyPrettyLittleInterface = interface
          ['{6F89487E-5BB7-42FC-A760-38DA2329E0C5}']
          procedure SomeProcedure;
        end;

    Unfortunately, RType.GetMethods returns a zero-length TArray instance. Is anyone able to reproduce my troubles? (Note that in my example I've explicitly fetched the TRttiType using TRttiContext.GetType, not the workaround; the introduction is included to warn readers that there might be some unresolved issues regarding RTTI and interfaces.) Thanks!

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children. So I thought about writing my own socket server, in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP), I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk nor in MySQL but in RAM, as an array or object, for best speed.

    As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; on an incoming message, write to all socket connections). The problem is that there will most likely be lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users.

    My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure whether it can process multiple events at the same time (wouldn't that need multithreading?) or whether there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (many fewer users), but afaik there are single-threaded IRC servers that are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.

    Read the article

  • Perl - CodeGolf - Nested loops & SQL inserts

    - by CheeseConQueso
    I had to make a really small and simple script that would fill a table with string values according to these criteria:

    - 2 characters long
    - 1st character is always numeric (0-9)
    - 2nd character is (0-9) but also includes "X"
    - values need to be inserted into a table on a database

    The program would execute:

        insert into table (code) values ('01');
        insert into table (code) values ('02');
        insert into table (code) values ('03');
        insert into table (code) values ('04');
        insert into table (code) values ('05');
        insert into table (code) values ('06');
        insert into table (code) values ('07');
        insert into table (code) values ('08');
        insert into table (code) values ('09');
        insert into table (code) values ('0X');

    And so on, until the total of 110 values were inserted. My code (just to accomplish it, not to minimize it or make it efficient) was:

        use strict;
        use DBI;

        my ($db1, $sql, $sth, %dbattr);
        %dbattr = (ChopBlanks => 1, RaiseError => 0);
        $db1 = DBI->connect('DBI:mysql:', '', '', \%dbattr);

        my @code;
        for (0..9) {
            $code[0] = $_;
            for (0..9) {
                $code[1] = $_;
                insert(@code);
            }
            insert($code[0], "X");
        }

        sub insert {
            my $skip = 0;
            foreach (@_) {
                if ($skip == 0) {
                    $sql = "insert into table (code) values ('" . $_[0] . $_[1] . "');";
                    $sth = $db1->prepare($sql);
                    $sth->execute();
                    $skip++;
                }
                else {
                    $skip--;
                }
            }
        }
        exit;

    I'm just interested to see a really succinct & precise version of this logic.

    Read the article

  • How do I write tasks? (parallel code)

    - by acidzombie24
    I am impressed with Intel Threading Building Blocks. I like how I get to write tasks and not thread code, and I like how it works under the hood, with my limited understanding (tasks sit in a pool; there won't be 100 threads on 4 cores; a task is not guaranteed to run because it isn't on its own thread and may sit far back in the pool, but it may be run together with another related task, so you can't do bad things like typical thread-unsafe code). I wanted to know more about writing tasks. I like the 'Task-based Multithreading - How to Program for 100 cores' video here: http://www.gdcvault.com/sponsor.php?sponsor_id=1 (currently the second-to-last link; WARNING, it isn't 'great'). My favourite part was 'solving the maze is better done in parallel', around the 48min mark (you can click the link on the left side). However, I'd like to see more code examples, and some API on how to write tasks. Does anyone have a good resource? I have no idea how a class or pieces of code might look after being pushed onto a pool, or how weird code may look when you need to make a copy of everything and how much of everything gets pushed onto a pool.

    Read the article

  • Different behaviour of method overloading in C#

    - by Wondering
    Hi All, I was going through C# Brainteasers (http://www.yoda.arachsys.com/csharp/teasers.html) and came across one question: what should be the output of the code below?

        class Base
        {
            public virtual void Foo(int x)
            {
                Console.WriteLine("Base.Foo(int)");
            }
        }

        class Derived : Base
        {
            public override void Foo(int x)
            {
                Console.WriteLine("Derived.Foo(int)");
            }

            public void Foo(object o)
            {
                Console.WriteLine("Derived.Foo(object)");
            }
        }

        class Test
        {
            static void Main()
            {
                Derived d = new Derived();
                int i = 10;
                d.Foo(i); // it prints "Derived.Foo(object)"
            }
        }

    But if I change the code to:

        class Derived
        {
            public void Foo(int x)
            {
                Console.WriteLine("Derived.Foo(int)");
            }

            public void Foo(object o)
            {
                Console.WriteLine("Derived.Foo(object)");
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Derived d = new Derived();
                int i = 10;
                d.Foo(i); // prints "Derived.Foo(int)"
                Console.ReadKey();
            }
        }

    I want to know why the output changes when we are inheriting vs. not inheriting; why does method overloading behave differently in the two cases?
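    The short version (this is documented C# overload-resolution behaviour, not specific to this snippet): when resolving d.Foo(i), the compiler first considers methods declared by Derived itself, and methods marked override do not count as Derived's own declarations, so the candidate set is just Foo(object); since int converts to object, that candidate is applicable and the compiler never looks at Base. Without inheritance, both Foo(int) and Foo(object) are declared by Derived, and Foo(int) is the better match. A quick way to see the rule in action, reusing Base and Derived from the first snippet above:

        class OverloadDemo
        {
            static void Main()
            {
                Derived d = new Derived();
                int i = 10;

                d.Foo(i);         // "Derived.Foo(object)": only Derived's own (non-override)
                                  // declarations are candidates, and Foo(object) is applicable.

                ((Base)d).Foo(i); // "Derived.Foo(int)": lookup now starts at Base, binds to the
                                  // virtual Foo(int), and dynamic dispatch runs Derived's override.
            }
        }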

    Read the article

  • VB.NET trying simple captcha

    - by Pride Grimm
    I'm trying to write a simple captcha program in VB.NET. I just want to make an image from random numbers and display it, check the answer, and then proceed. I'm pretty new to VB.NET, so I found some code that generates the image. To cite the owner: http://www.knowlegezone.com/80/article/Technology/Software/Asp-Net/Simple-ASP-NET-CAPTCHA-Tutorial

    This is in the onload() of Default2.aspx:

        Public Sub returnNumer()
            Dim num1 As New Random
            Dim num2 As New Random
            Dim numQ1 As Integer
            Dim numQ2 As Integer
            Dim QString As String
            numQ1 = num1.Next(10, 15)
            numQ2 = num2.Next(17, 31)
            QString = numQ1.ToString + " + " + numQ2.ToString + " = "
            Session("answer") = numQ1 + numQ2
            Dim bitmap As New Bitmap(85, 35)
            Dim Grfx As Graphics = Graphics.FromImage(bitmap)
            Dim font As New Font("Arial", 18, FontStyle.Bold, GraphicsUnit.Pixel)
            Dim Rect As New Rectangle(0, 0, 100, 50)
            Grfx.FillRectangle(Brushes.Brown, Rect)
            Grfx.DrawRectangle(Pens.PeachPuff, Rect) ' Border
            Grfx.DrawString(QString, font, Brushes.Azure, 0, 0)
            Response.ContentType = "Image/jpeg"
            bitmap.Save(Response.OutputStream, System.Drawing.Imaging.ImageFormat.Jpeg)
            bitmap.Dispose()
            Grfx.Dispose()
        End Sub

    So I put this in a separate page (Default2.aspx) and point an image tag at it. This all works fine and dandy, but when I get the answer from the session like this:

        Dim literal As String = Convert.ToString(Session("answer"))

    it's always one behind. So if the image adds up to 32, the answer in session isn't 32; but after a refresh (and a new image), Session("answer") will be 32. Is there a way to refresh the session on page 1 after Default2.aspx loads? Is there a better way to do this? I thought about trying to run the code all on one page and setting the src of an image to returnNumer(), but I need a bit of help on that one.

    Read the article

  • Avoiding a fork()/SIGCHLD race condition

    - by larry
    Please consider the following fork()/SIGCHLD pseudo-code.

        // main program excerpt
        for (;;) {
            if ( is_time_to_make_babies ) {
                pid = fork();
                if (pid == -1) {
                    /* fail */
                } else if (pid == 0) {
                    /* child stuff */
                    print "child started"
                    exit
                } else {
                    /* parent stuff */
                    print "parent forked new child ", pid
                    children.add(pid);
                }
            }
        }

        // SIGCHLD handler
        sigchld_handler(signo) {
            while ( (pid = wait(status, WNOHANG)) > 0 ) {
                print "parent caught SIGCHLD from ", pid
                children.remove(pid);
            }
        }

    In the above example there's a race condition. It's possible for /* child stuff */ to finish before /* parent stuff */ starts, which can result in a child's pid being added to the list of children after it has exited, and never being removed. When the time comes for the app to close down, the parent will wait endlessly for the already-finished child to finish.

    One solution I can think of to counter this is to have two lists: started_children and finished_children. I'd add to started_children in the same place I'm adding to children now. But in the signal handler, instead of removing from children, I'd add to finished_children. When the app closes down, the parent can simply wait until the difference between started_children and finished_children is zero. Another possible solution I can think of is using shared memory, e.g. sharing the parent's list of children and letting the children .add and .remove themselves? But I don't know too much about this.

    EDIT: Another possible solution, which was the first thing that came to mind, is to simply add a sleep(1) at the start of /* child stuff */, but that smells funny to me, which is why I left it out. I'm also not even sure it's a 100% fix.

    So, how would you correct this race condition? And if there's a well-established recommended pattern for this, please let me know! Thanks.

    Read the article

  • Safari doesn't detect my Extension Certificate

    - by Questor
    Hello, all! I have registered for the Safari Development Program and have a valid Apple ID. I've followed all the steps given by Apple. The problem is that Windows XP (Service Pack 2) does not recognize the command 'certreq', whereas the instructions said it would work on any Windows machine. However, the 'certreq' command was working on my co-worker's Windows Vista machine, so I downloaded the certificate (the .cer file) there, installed it, and Safari detected it. But I don't have Windows Vista. I have now installed Windows 7 on my machine; the 'certreq' command works and I have the Safari Extension Certificate (the .cer file), but when I open Safari's Extension Builder, my certificate does not appear there. I entered mmc in Start - Run and checked whether the certificate was installed: it was in the 'Other People' store but not in 'Personal'. Even in Internet Explorer 7+, when I go to Tools - Internet Options - Content (tab) - Certificates, the certificate is not there in the Personal tab (WHEREAS IT GOT INSTALLED IN THE PERSONAL FOLDER AUTOMATICALLY ON WINDOWS VISTA). I tried importing the certificate (the .cer file) into the Personal folder; the import is successful, but still it neither appears in the Personal folder nor does Safari recognize/detect it when I go to the Extension Builder. ANY HELP?! I need to make an extension for my office project and the deadline is approaching. I really need to get it done. Thanks a million in anticipation.

    Read the article

  • Boost link error when using "--layout=system" on VS2005

    - by Kevin
    I'm new to Boost, and thought I'd try it out with some realistic deployment scenarios for the .dlls, so I used the following command to compile/install the libraries:

        .\bjam install --layout=system variant=debug runtime-link=shared link=shared --with-date_time --with-thread --with-regex --with-filesystem --includedir=<my include directory> --libdir=<my bin directory> > installlog.txt

    That seemed to work, but my simple program (taken right from the "Getting Started" page) fails:

        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        // Place your functions after this line
        int main()
        {
            std::string line;
            boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" );
            while (std::cin)
            {
                std::getline(std::cin, line);
                boost::smatch matches;
                if (boost::regex_match(line, matches, pat))
                    std::cout << matches[2] << std::endl;
            }
        }

    It fails with the following linker error:

        fatal error LNK1104: cannot open file 'libboost_regex-vc80-mt-1_42.lib'

    I'm sure that both the .lib and the .dlls are in that directory, and named how I want them to be (i.e. boost_regex.lib, etc., all unversioned, as --layout=system says). So why is it looking for the versioned name? And how do I get it to look for the unversioned library? I've tried this with more "normal" options, such as:

        .\bjam stage --build-type=complete --with-date_time --with-thread --with-filesystem --with-regex > mybuildlog.txt

    And that works fine. I made sure my compiler saw the "stage\lib" directory, and it compiled and ran fine with nothing beyond having the environment point at the right lib directory. But when I took those "testing" directories away and wanted to use these others (unversioned), it failed. I'm under VS2005 here on XP. Any ideas?

    Read the article
