Search Results

Search found 21511 results on 861 pages for 'appstore approval process'.

Page 81 of 861

  • How to implement a progress bar (to show progress) using threads in Win32?

    - by Rakesh
    I am trying to show a progress bar while my process is running. In my application there is a situation where I have to read files and manipulate them (this will take some time to complete), and I want to display a progress bar during this operation. The particular function I am calling is a Win32 function. If you check my code below, I am up to the point of creating the progress bar in a dialog window and creating a thread. Now I don't know how to post the message, or where to get the message and handle it. Please help me. Thanks in advance.

        //my function
        int Myfunction(....)
        {
            HWND dialog = CreateWindowEx(0, WC_DIALOG, L"Processing...", WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                                         600, 300, 280, 120, NULL, NULL, NULL, NULL);
            HWND pBar = CreateWindowEx(NULL, PROGRESS_CLASS, NULL, WS_CHILD | WS_VISIBLE,
                                       40, 20, 200, 20, dialog, (HMENU)IDD_PROGRESS, NULL, NULL);
            HANDLE getHandle = CreateThread(NULL, NULL, (LPTHREAD_START_ROUTINE)SetFilesForOperation(...),
                                            NULL, NULL, 0);
        }

        LPARAM SetFilesForOperation(...)
        {
            for(int index = 0; index < noOfFiles; index++)
            {
                *checkstate = *(checkState + index);
                if(*checkstate == -1)
                {
                    *(getFiles+i) = new TCHAR[MAX_PATH];
                    wcscpy(*(getFiles+i), *(dataFiles+index));
                    i++;
                }
                else
                {
                    (*tempDataFiles)->Add(*(dataFiles+index));
                    *(checkState + localIndex) = *(checkState + index);
                    localIndex++;
                }
                //SendMessage(pBar, PBM_SETSTEP, 1, 0);
            }
        }
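    A minimal sketch of the missing piece, assuming the progress bar handle and file count are made visible to the worker (the global names here are illustrative, not from the original code): start the worker by passing the function pointer to CreateThread rather than calling it, and let the worker report progress with PostMessage, which is safe to call from a non-UI thread.

        #include <windows.h>
        #include <commctrl.h>

        static HWND g_pBar;        // progress bar handle shared with the worker (assumed global)
        static int  g_noOfFiles;   // hypothetical total number of files to process

        DWORD WINAPI SetFilesForOperation(LPVOID param)
        {
            // Set the range once, then advance the bar as each file is handled.
            PostMessage(g_pBar, PBM_SETRANGE, 0, MAKELPARAM(0, g_noOfFiles));
            for (int index = 0; index < g_noOfFiles; index++)
            {
                // ... read and manipulate file 'index' here ...
                PostMessage(g_pBar, PBM_SETPOS, index + 1, 0);
            }
            return 0;
        }

        // In Myfunction, pass the function itself instead of its call result:
        // HANDLE getHandle = CreateThread(NULL, 0, SetFilesForOperation, NULL, 0, NULL);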

    Read the article

  • How to set permissions or alter a git commit process when using local repositories

    - by Tony
    I have a server that contains a central git repository and one of my co-worker's development environments. My co-worker's repository's origin is the central git repository, and he pushes there when he has some code to share. Likewise, I develop locally and push to the central git repository when I have some code to share, so my repository's origin is also the central git repository. The issue is that I have the central git repository under a "git" user's home directory, so when I push I am actually SSH'ing into the server as the "git" user. To be even more clear, my config has these lines:

        $ more .git/config
        [remote "origin"]
            fetch = +refs/heads/*:refs/remotes/origin/*
            url = [email protected]:fsg
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    When I push, git handles this SSH + push seamlessly with, I am guessing, some sort of git shell. The issue is that when my coworker pushes, he is logged in as himself and gets a bunch of crazy permission errors. Is there a typical way to solve this problem without opening up git's directories to a group? I think that would be problematic when I push and therefore overwrite the repository and those permissions are reset. Thanks!
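    One common arrangement (a sketch only; host and file names are placeholders) is to have every developer push as the single "git" account over SSH, authorized by their own public keys, so all repository objects end up owned by one user and the permission question goes away:

        # on the server, as the "git" user: authorize the co-worker's key
        cat coworker_id_rsa.pub >> ~/.ssh/authorized_keys

        # on the co-worker's machine: point origin at the shared "git" account
        git config remote.origin.url git@server:fsg

    The alternative is a group-shared repository (git init --bare --shared=group plus a common Unix group), which is exactly the group-permission route the poster wants to avoid.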

    Read the article

  • Understanding how the heap is written during process migration in Solaris

    - by akshay
    Hi guys, I need help understanding what this piece of code actually does; it is part of my project and I am stuck here. The code is from libckpt on Solaris.

        /**********************************
         * function: write_heap
         * args: map_fd  -- file descriptor for map file
         *       data_fd -- file descriptor for data file
         * returns: no. of chunks written on success, -1 on failure
         * side effects: writes all included segments of the heap to ckpt files
         * misc.: If we are forking and copyonwrite is set, we will write the heap from bottom to
         *        top, moving the brk pointer up each time so that we don't get a page copied if the
         * called from: take_ckpt()
         ***********************************/
        static int write_heap(int map_fd, int data_fd)
        {
            Dlist curptr, endptr;
            int no_chunks = 0, pn;
            long size;
            caddr_t stop, addr;

            if(ckptflags.incremental){   /*-- incremental checkpointing on? --*/
                endptr = ckptglobals.inc_list->main->flink;

                /*-- for each included chunk of the heap --*/
                for(curptr = ckptglobals.inc_list->main->blink->blink; curptr != endptr; curptr = curptr->blink){

                    /*-- write out the last page in the included chunk --*/
                    stop = curptr->addr;
                    pn = ((long)curptr->stop - (long)sys.DATASTART) / PAGESIZE;
                    if(isdirty(pn)){
                        addr = (caddr_t)max((long)curptr->addr, (long)((pn * PAGESIZE) + sys.DATASTART));
                        size = (long)curptr->stop - (long)addr;
                        debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n", addr, addr+size, pn);
                        if(write_chunk(addr, size, map_fd, data_fd) == -1){
                            return -1;
                        }
                        if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                            brk(addr);
                        }
                        no_chunks++;
                    }

                    /*-- write out all the whole pages in the middle of the chunk --*/
                    for(pn--; pn * PAGESIZE + sys.DATASTART >= stop; pn--){
                        if(isdirty(pn)){
                            addr = (caddr_t)((pn * PAGESIZE) + sys.DATASTART);
                            debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n", addr, addr+PAGESIZE, pn);
                            if(write_chunk(addr, PAGESIZE, map_fd, data_fd) == -1){
                                return -1;
                            }
                            if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                                brk(addr);
                            }
                            no_chunks++;
                        }
                    }

                    /*-- write out the first page in the included chunk --*/
                    addr = curptr->addr;
                    size = ((pn+1) * PAGESIZE + sys.DATASTART) - addr;
                    if(size > 0 && (isdirty(pn))){
                        debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x\n", addr, addr+size);
                        if(write_chunk(addr, size, map_fd, data_fd) == -1){
                            return -1;
                        }
                        if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                            brk(addr);
                        }
                        no_chunks++;
                    }
                }
            }
            else{   /*-- incremental checkpointing off! --*/
                endptr = ckptglobals.inc_list->main->blink;

                /*-- for each included chunk of the heap --*/
                for(curptr = ckptglobals.inc_list->main->flink->flink; curptr != endptr; curptr = curptr->flink){
                    debug(stderr, "DEBUG: saving memory from 0x%x to 0x%x\n", curptr->addr, curptr->addr+curptr->size);
                    if(write_chunk(curptr->addr, curptr->size, map_fd, data_fd) == -1){
                        return -1;
                    }
                    if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                        brk(addr);
                    }
                    no_chunks++;
                }
            }
            return no_chunks;
        }

    Read the article

  • Efficiently connecting SQL Server with a numerical methods library

    - by darkcminor
    Is there a way to connect to a numerical methods library that can exploit all of .NET's abilities (numerical methods called from .NET code that does the SQL Server work)? What library do you recommend, maybe MATLAB or R? How do you connect SQL Server and .NET with such a library or libraries? Do you have an example? What steps must be followed to make the link between the numerical libraries, .NET, and SQL Server?
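    One way the pieces are commonly wired together (a sketch only; SQL CLR must be enabled on the server, and MyNumericsLib and NormalCdf are hypothetical names standing in for whatever managed numerics library is chosen) is to expose the library call as a SQL CLR function that T-SQL can invoke directly:

        using Microsoft.SqlServer.Server;
        using System.Data.SqlTypes;

        public static class NumericFunctions
        {
            // Exposed to T-SQL once the assembly is registered with
            // CREATE ASSEMBLY / CREATE FUNCTION ... EXTERNAL NAME.
            [SqlFunction]
            public static SqlDouble NormalCdf(SqlDouble x)
            {
                // Delegate the actual math to the managed numerics library of choice.
                double v = MyNumericsLib.Distributions.NormalCdf(x.Value);
                return new SqlDouble(v);
            }
        }

    From there the function can be called inside ordinary queries (SELECT dbo.NormalCdf(col) FROM ...), which keeps the numerical code in .NET while SQL Server supplies the data.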

    Read the article

  • Communication between parent and child

    - by Pierre
    Hi everyone! I have a little problem. I have to code a simple C application that creates a process and its child (with fork()) and performs an operation: the parent initializes the values and the child calculates. I wrote this:

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <signal.h>
        #include <sys/wait.h>

        typedef struct {
            int op1;
            char op;
            int op2;
        } Operation;

        Operation *varOP;

        void finalResult()
        {
            float result = 0;
            if(varOP->op == '+') result = (varOP->op1 + varOP->op2);
            if(varOP->op == '-') result = (varOP->op1 - varOP->op2);
            if(varOP->op == '*') result = (varOP->op1 * varOP->op2);
            if(varOP->op == '/') result = (varOP->op1 / varOP->op2);
            printf("%f", result);
        }

        int main()
        {
            int p;
            varOP = (Operation *)malloc(sizeof(Operation));
            p = fork();
            if(p == 0)   // If child
            {
                signal(SIGUSR1, finalResult);
                pause();
            }
            if(p > 0)    // If parent
            {
                varOP->op = '+';
                varOP->op1 = 2;
                varOP->op2 = 3;
                kill(p, SIGUSR1);
                wait(NULL);
            }
            return 0;
        }

    But my child is never called. Is there something wrong with my code? Thanks for your help!
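    Two things commonly bite here: after fork() the parent and child each have their own copy of *varOP, so the values the parent writes are never seen by the child, and the parent may send SIGUSR1 before the child has installed its handler. A minimal sketch of the shared-memory part (illustrative only; a pipe or a POSIX semaphore would also work for the handshake):

        #include <sys/mman.h>

        /* Allocate the struct in a shared anonymous mapping instead of malloc(),
         * before fork(), so parent and child see the same memory afterwards. */
        varOP = mmap(NULL, sizeof(Operation), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);

        /* In the parent: fill in op/op1/op2, then make sure the child has had a
         * chance to call signal() and pause() before kill(p, SIGUSR1) is sent
         * (e.g. sleep(1) for a quick test, or a proper synchronization primitive). */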

    Read the article

  • Jasper error: Caused by SQLServerException: Transaction (Process ID 58) was deadlocked on thread | c

    - by Saky
    I got the above error in my Jasper report mail. The query that is used in the report is quite complicated (for me). Reading different posts, I conclude that to solve this I have to change the query to:

        SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
        GO
        BEGIN TRANSACTION
        ... my query ...
        COMMIT TRANSACTION

    I wonder whether this is the correct way to solve the error and whether it has any side effects. Has this happened to anyone with Jasper reports? Does anyone know whether a better solution to the problem exists? (I have not yet tested the above solution, so any insight on this would be helpful.)
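    For a read-only report, a common mitigation is to lower or change the isolation level of the report query itself rather than wrapping it in an explicit transaction (a sketch; whether dirty reads or snapshot reads are acceptable depends on the data, MyDb is a placeholder, and snapshot isolation has to be enabled at the database level first):

        -- Option 1: allow dirty reads for the report session (no shared locks taken)
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        -- ... report query ...

        -- Option 2: read a consistent snapshot instead of blocking on writers
        -- (requires: ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON)
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        -- ... report query ...

    Note that REPEATABLE READ, as proposed above, holds more locks for longer than the default level and generally makes deadlocks more likely for a reporting query, not less.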

    Read the article

  • XML files stop being served by IIS6 after allowing .NET to process the .xml extension

    - by Brian Surowiec
    I added a route to my site to allow for a sitemap, and everything worked fine in IIS7, but once I deployed it the route stopped working. Since the live server is running IIS6, I needed to put in a new mapping for .xml to be processed by .NET, and then it started to work. My issue now, though, is with every other XML file on the site: I keep getting a 404 error when trying to view XML files, but the Sitemap.xml route works. Is this a routing issue or an IIS setup issue? Here are my routes, if it helps:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Gallery-Group-View",
            "Projects/{groupId}",
            new { controller = "Gallery", action = "GalleryList", groupId = "" });

        routes.MapRoute(
            "Gallery-List-View",
            "Projects/{groupId}/{galleryId}",
            new { controller = "Gallery", action = "GalleryView", groupId = "", galleryId = "" });

        routes.MapRoute(
            "Sitemap",
            "Sitemap.xml",
            new { controller = "XML", action = "Sitemap" });

        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" });
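    Once the .xml extension is mapped to ASP.NET, every XML request goes through routing, and requests that match no usable route end up as 404s. One way to keep the Sitemap.xml route while letting other XML files fall through to static file serving is an ignore rule placed after the Sitemap route (a sketch; route order matters, and it assumes the other XML files do not live under URLs already claimed by the gallery routes):

        // Sitemap.xml still matches its named route first; any other *.xml URL hits
        // this StopRoutingHandler and is served by the static file handler instead.
        routes.IgnoreRoute("{*staticxml}", new { staticxml = @".*\.xml(/.*)?" });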

    Read the article

  • Oracle global lock across processes

    - by Jimm
    I would like to synchronize access to a particular insert, so that if multiple applications execute this one insert, the inserts happen one at a time. The reason behind the synchronization is that there should only be ONE instance of this entity; if multiple applications try to insert the same entity, only one should succeed and the others should fail. One option considered was to create a composite unique key that would uniquely identify the entity and rely on the unique constraint. For some reason, the DBA department rejected this idea. The other option that came to my mind was to create a stored proc for the insert; if the stored proc can obtain a global lock, then multiple applications invoking the same stored proc, though in their separate database sessions, would have their inserts serialized. My question: is it possible for a stored proc in Oracle 10/11 to obtain such a lock? Any pointers to documentation would be helpful.
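    Oracle ships an application-lock facility, DBMS_LOCK, that does exactly this kind of cross-session serialization. A sketch (table and lock names are illustrative, and the schema needs EXECUTE privilege on DBMS_LOCK, which often has to be granted explicitly):

        CREATE OR REPLACE PROCEDURE insert_entity(p_name IN VARCHAR2) AS
          l_handle VARCHAR2(128);
          l_status INTEGER;
        BEGIN
          -- Map an application-chosen lock name to a handle, then take an exclusive lock.
          DBMS_LOCK.ALLOCATE_UNIQUE('ENTITY_INSERT_LOCK', l_handle);
          l_status := DBMS_LOCK.REQUEST(l_handle, DBMS_LOCK.X_MODE,
                                        timeout => 10, release_on_commit => TRUE);
          IF l_status NOT IN (0, 4) THEN   -- 0 = got the lock, 4 = already held it
            RAISE_APPLICATION_ERROR(-20001, 'Could not obtain entity lock');
          END IF;

          -- The check-then-insert below is now serialized across all sessions.
          INSERT INTO entities(name)
          SELECT p_name FROM dual
          WHERE NOT EXISTS (SELECT 1 FROM entities WHERE name = p_name);

          COMMIT;   -- release_on_commit frees the lock here
        END;
        /

    The relevant documentation is the DBMS_LOCK chapter of the Oracle Database PL/SQL Packages and Types Reference.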

    Read the article

  • Integrating the test process with RSpec and Cucumber in a plugin using Desert

    - by Romain Endelin
    Hello, I'm developing some Rails plugins with Desert, and I would like to use RSpec and Cucumber tests during development. RSpec is integrated by default in Desert plugins, but it gives me some bugs. In the end I created a minimal application which uses my plugin, and I try to test it like a normal application without a plugin. But I dislike that solution: the plugin loses its modularity, and RSpec still gives me errors (it can't find the models/views/controllers located in the plugin...). So am I the only one trying to test a Desert plugin (or even a classic plugin)? Does anyone know about these issues and have a solution?

    Read the article

  • Customizing error handling of the JAXB unmarshal process

    - by ekeren
    Assuming I have a schema that describes a root element class Root that contains a List<Entry>, where the Entry class has a required field name, here is how it looks in code:

        @XmlRootElement
        class Root {
            @XmlElement(name = "entry")
            public List<Entry> entries = Lists.newArrayList();
        }

        @XmlRootElement
        class Entry {
            @XmlElement(name = "name", required = true)
            public String name;
        }

    If I supply the following XML for unmarshalling:

        <root>
          <entry>
            <name>ekeren</name>
          </entry>
          <entry>
          </entry>
        </root>

    I have a problem, because the second entry does not contain a name, so unmarshalling produces a null. Is there a way to customize JAXB so that unmarshalling produces a Root object that only contains the "good" entry?
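    One approach (a sketch; a ValidationEventHandler is the alternative if the document should be rejected instead of repaired) is to prune incomplete entries in an afterUnmarshal callback, a method JAXB invokes automatically when one with this exact signature exists on the class:

        import java.util.Iterator;
        import javax.xml.bind.Unmarshaller;

        @XmlRootElement
        class Root {
            @XmlElement(name = "entry")
            public List<Entry> entries = Lists.newArrayList();

            // Called by JAXB after this object and its children have been unmarshalled;
            // drop any Entry whose required name was missing from the XML.
            void afterUnmarshal(Unmarshaller unmarshaller, Object parent) {
                for (Iterator<Entry> it = entries.iterator(); it.hasNext(); ) {
                    if (it.next().name == null) {
                        it.remove();
                    }
                }
            }
        }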

    Read the article

  • Threading process in ASP.NET

    - by Zerotoinfinite
    Hi all, I am using ASP.NET 3.5 and C#. I have a blog site, and I want the subscribers related to a post to get a notification whenever a user enters a comment. What I am doing now is sending the mail at the same time the comment is inserted into the table, which sometimes takes a while because of the number of users. Is there any way for the comment to be entered into the database while the send-mail function runs asynchronously, so that it does not stop the user from going ahead with his task? Please let me know how to achieve this in a simple way. Thanks in advance.
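    A minimal sketch of one option in .NET 3.5: insert the comment first, then queue the mail work onto a thread-pool thread so the request returns immediately. BuildNotificationMail, postId and commentText are hypothetical stand-ins for the poster's own code, and SmtpClient is assumed to be configured via <system.net>/<mailSettings> in web.config:

        using System.Net.Mail;
        using System.Threading;

        // ... after the comment row has been inserted ...
        ThreadPool.QueueUserWorkItem(state =>
        {
            // Build and send the notification without blocking the request thread.
            var client = new SmtpClient();   // reads host/credentials from web.config
            using (MailMessage message = BuildNotificationMail(postId, commentText))
            {
                client.Send(message);
            }
        });

    One caveat: work queued this way is lost if the application pool recycles before it runs, so a busy site may prefer to queue notifications in a table and send them from a separate job.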

    Read the article

  • Execute a Bash command in Python -- in the same process

    - by Baldur
    I need to execute the command . /home/db2v95/sqllib/db2profile before I can import ibm_db_dbi in Python 2.6. Executing it before I enter Python works:

        baldurb@gigur:~$ . /home/db2v95/sqllib/db2profile
        baldurb@gigur:~$ python
        Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15)
        [GCC 4.4.1] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import ibm_db_dbi
        >>>

    but executing it in Python using os.system(". /home/db2v95/sqllib/db2profile") or subprocess.Popen([". /home/db2v95/sqllib/db2profile"]) results in an error. What am I doing wrong? Edit: this is the error I receive:

        Traceback (most recent call last):
          File "<file>.py", line 8, in <module>
            subprocess.Popen([". /home/db2v95/sqllib/db2profile"])
          File "/usr/lib/python2.6/subprocess.py", line 621, in __init__
            errread, errwrite)
          File "/usr/lib/python2.6/subprocess.py", line 1126, in _execute_child
            raise child_exception
        OSError: [Errno 2] No such file or directory
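    The dot command is a shell builtin that modifies the invoking shell's own environment, so running it in a child process (which is what os.system and subprocess do) cannot affect the Python process that launched it. A sketch of a common workaround: source the profile in a shell, dump the resulting environment, and copy it into os.environ before importing. Note that variables consumed at interpreter start-up, such as LD_LIBRARY_PATH, may still need to be set before Python is launched:

        import os
        import subprocess

        def source_profile(path):
            """Source the given profile in a shell and copy the resulting
            environment variables into this Python process."""
            cmd = ['sh', '-c', '. %s >/dev/null 2>&1; env' % path]
            output = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0]
            for line in output.splitlines():
                if '=' in line:
                    key, _, value = line.partition('=')
                    os.environ[key] = value

        source_profile('/home/db2v95/sqllib/db2profile')
        import ibm_db_dbi  # now sees the DB2 environment variables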

    Read the article

  • Service Broker message processing order

    - by Blootac
    Everywhere I read says that messages handled by Service Broker are processed in the order they arrive. And yet, if you create a table, message type, contract, service etc., give the queue an activation stored proc that waits 2 seconds and inserts the message into a table, set the max queue readers to 5 or 10, and send 20-odd messages, I can see in the table that they are inserted out of order, even though when I send them and look at the contents of the queue the messages are all in the right order. Is it due to WAITFOR DELAY waiting to the nearest second, with each thread having different sub-second times and then fighting for a lock or something? The reason I've got a delay in there is to simulate delays from joins etc. Thanks. Demo code:

        --create the table and service broker objects
        CREATE TABLE test
        (
            id int identity(1,1),
            contents varchar(100)
        )

        CREATE MESSAGE TYPE test
        CREATE CONTRACT mycontract
        (
            test sent by initiator
        )
        GO

        CREATE PROCEDURE dostuff
        AS
        BEGIN
            DECLARE @msg varchar(100);
            RECEIVE TOP (1) @msg = message_body FROM myQueue
            IF @msg IS NOT NULL
            BEGIN
                WAITFOR DELAY '00:00:02'
                INSERT INTO test(contents) values(@msg)
            END
        END
        GO

        ALTER QUEUE myQueue WITH
            STATUS = ON,
            ACTIVATION
            (
                STATUS = ON,
                PROCEDURE_NAME = dostuff,
                MAX_QUEUE_READERS = 10,
                EXECUTE AS SELF
            )

        create service senderService on queue myQueue ( mycontract )
        create service receiverService on queue myQueue ( mycontract )
        GO

        --**********************************************************
        --now insert lots of messages into the queue
        DECLARE @dialog_handle uniqueidentifier

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>');

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>3</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>4</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>5</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>6</test>')

        BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>7</test>')

    Read the article

  • C# Process Binary File, Multi-Thread Processing

    - by washtik
    I have the following code that processes a binary file. I want to split the processing workload by using threads, assigning each line of the binary file to threads in the ThreadPool. Processing time for each line is only small, but when dealing with files that might contain hundreds of lines it makes sense to split the workload. My question is regarding the BinaryReader and thread safety. First of all, is what I am doing below acceptable? I have a feeling it would be better to pass only the binary data for each line to the PROCESS_Binary_Return_lineData method. Please note the code below is conceptual. I am looking for a bit of guidance on this, as my knowledge of multi-threading is in its infancy. Perhaps there is a better way to achieve the same result, i.e. split the processing of each binary line.

        var dic = new Dictionary<DateTime, Data>();
        var resetEvent = new ManualResetEvent(false);
        using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            var lByte = b.BaseStream.Length;
            var toProcess = 0;
            while (lByte >= DATALENGTH)
            {
                b.BaseStream.Position = lByte;
                lByte = lByte - AB_DATALENGTH;
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Interlocked.Increment(ref toProcess);
                    var lineData = PROCESS_Binary_Return_lineData(b);
                    lock (dic)
                    {
                        if (!dic.ContainsKey(lineData.DateTime))
                        {
                            dic.Add(lineData.DateTime, lineData);
                        }
                    }
                    if (Interlocked.Decrement(ref toProcess) == 0)
                        resetEvent.Set();
                }, null);
            }
        }
        resetEvent.WaitOne();
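    The instinct in the question is sound: BinaryReader is not thread-safe, so a common restructuring (sketched below, assuming PROCESS_Binary_Return_lineData is changed to accept a byte[]) reads each record's bytes on the one thread that owns the reader and hands only that buffer to the pool. Counting work items before queuing them, instead of inside the callback, also closes the race in which the reset event could be set before all items were queued:

        var dic = new Dictionary<DateTime, Data>();
        var resetEvent = new ManualResetEvent(false);
        int toProcess = 1;   // start at 1 so the final decrement below releases the event

        using (var b = new BinaryReader(File.Open(Constants.dataFile,
                   FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            long pos = b.BaseStream.Length - AB_DATALENGTH;
            while (pos >= 0)
            {
                b.BaseStream.Position = pos;
                byte[] record = b.ReadBytes(DATALENGTH);   // read on the reader's own thread
                pos -= AB_DATALENGTH;

                Interlocked.Increment(ref toProcess);      // count before queuing
                ThreadPool.QueueUserWorkItem(delegate
                {
                    var lineData = PROCESS_Binary_Return_lineData(record);  // private copy per work item
                    lock (dic)
                    {
                        if (!dic.ContainsKey(lineData.DateTime))
                            dic.Add(lineData.DateTime, lineData);
                    }
                    if (Interlocked.Decrement(ref toProcess) == 0)
                        resetEvent.Set();
                });
            }
        }
        if (Interlocked.Decrement(ref toProcess) == 0)     // release the initial count
            resetEvent.Set();
        resetEvent.WaitOne();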

    Read the article

  • How is Java Process.getOutputStream() Implemented?

    - by Amit Kumar
    I know the answer depends on the particular JVM, but I would like to understand how it is usually implemented. Is it implemented in terms of popen (POSIX)? In terms of efficiency, is there anything I need to keep in mind (other than using a buffered stream, as suggested by the Javadoc)? I would be interested to know whether there is a general reference about JVM implementations that answers such questions.
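    On the efficiency point, the streams a typical JVM hands back are thin wrappers around operating-system pipe descriptors, so the Javadoc's advice amounts to buffering them to avoid a system call per small read or write. A brief illustrative example (it assumes a 'sort' executable on the PATH, and says nothing about any particular JVM's internals):

        import java.io.*;

        public class BufferedPipes {
            public static void main(String[] args) throws IOException {
                Process p = new ProcessBuilder("sort").start();

                // Buffer both directions of the child's pipes.
                Writer toChild = new BufferedWriter(new OutputStreamWriter(p.getOutputStream()));
                BufferedReader fromChild = new BufferedReader(new InputStreamReader(p.getInputStream()));

                toChild.write("banana\napple\n");
                toChild.close();                    // EOF lets 'sort' finish

                for (String line; (line = fromChild.readLine()) != null; ) {
                    System.out.println(line);
                }
            }
        }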

    Read the article

  • Twisted - how to create a multi-protocol process and send data between the protocols

    - by SpankMe
    Hey, I'm trying to write a program that listens for data (simple text messages) on some port (say TCP 6666) and then passes them to one or more different protocols - IRC, XMPP and so on. I've tried many approaches and dug through the Internet, but I can't find an easy, working solution for such a task. The code I am currently fighting with is here: http://pastebin.com/ri7caXih I would like to know how, from an object like ircf = ircFactory('asdfasdf', '#asdf666'), to get access to the protocol's methods, because this: self.protocol.dupa1(msg) returns an error about self not being passed to the active protocol object. Or maybe there is another, better, easier and more kosher way to create a single reactor with multiple protocols, have actions triggered when a message arrives on any of them, and then pass that message to the other protocols for handling/processing/sending? Any help will be highly appreciated!
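    A minimal sketch of the usual shape (this is not the poster's pastebin code; ports and class names are illustrative): one reactor, several listening factories, and a shared hub object that every connected protocol registers with, so a line arriving on any connection can be relayed to all the others.

        from twisted.internet import reactor
        from twisted.internet.protocol import Factory
        from twisted.protocols.basic import LineReceiver

        class Hub(object):
            """Keeps track of live protocol instances and relays lines between them."""
            def __init__(self):
                self.clients = []

            def relay(self, sender, line):
                for proto in self.clients:
                    if proto is not sender:
                        proto.sendLine(line)

        class RelayProtocol(LineReceiver):
            def connectionMade(self):
                self.factory.hub.clients.append(self)

            def connectionLost(self, reason):
                self.factory.hub.clients.remove(self)

            def lineReceived(self, line):
                self.factory.hub.relay(self, line)

        class RelayFactory(Factory):
            protocol = RelayProtocol

            def __init__(self, hub):
                self.hub = hub

        hub = Hub()
        reactor.listenTCP(6666, RelayFactory(hub))   # plain-text message input
        reactor.listenTCP(6667, RelayFactory(hub))   # stand-in for the IRC/XMPP side
        reactor.run()

    The same hub pattern extends to real IRC or XMPP factories: each factory is built with a reference to the hub, its protocol registers itself in connectionMade, and incoming messages are handed to hub.relay instead of being pushed through self.protocol on the factory.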

    Read the article

  • Silverlight file upload - file is in use by another process (Excel, Word)

    - by walkor
    Hi, all. I have a problem with uploading a file in a Silverlight application. Here is a code sample. In case this file is open in another application (Excel or Word, for example) it fails to open it; otherwise it works fine. I'm using OpenFileDialog to choose the file and pass it to this function.

        private byte[] GetFileContent(FileInfo file)
        {
            var result = new byte[] {};
            try
            {
                using (var fs = file.OpenRead())
                {
                    result = new byte[file.Length];
                    fs.Read(result, 0, (int)file.Length);
                }
            }
            catch (Exception e)
            {
                // File is in use
            }
            return result;
        }

    Is there any way I can access this file, or should I just notify the user that the file is locked?

    Read the article
