Search Results

Search found 21511 results on 861 pages for 'appstore approval process'.

  • A question about making a C# class persistent during a file load

    - by Adam
    Apologies for the vague title; it's the best I could think of for the moment. I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I'm looking for is a way to keep this class running so that I can call methods on it even after its calling class has shut down. The singleton class is simple: it starts a thread that loads the file into the database, and exposes methods to report the current status. In a nutshell it looks a little like this:

        public sealed class BulkFileLoader
        {
            static BulkFileLoader instance = null;
            int currentCount = 0;

            BulkFileLoader() { }

            public static BulkFileLoader Instance
            {
                get { /* instantiate the instance if necessary, and return it */ }
            }

            public void Go()
            {
                // kick off the 'ProcessFile' thread
            }

            public int GetCurrentCount()
            {
                return currentCount;
            }

            private void ProcessFile()
            {
                while (/* more rows in the import file */)
                {
                    // insert the row into the database
                    currentCount++;
                }
            }
        }

    The idea is that you get an instance of BulkFileLoader and set it processing a file, while at any time you can get a real-time update on the number of rows it has processed so far via GetCurrentCount(). This works fine, except that the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed and it stops processing the file. What I am after is a solution where it continues to run independently, regardless of what happens to the calling class.

    I then tried another approach: I created a simple console application that kicks off the BulkFileLoader, and launched it as a separate process. This fixes one problem, since the file now continues to load even if I close the class that started the process. However, now I cannot get updates on the current count, because if I ask for the BulkFileLoader instance (a singleton, as mentioned), I get a new instance rather than the one inside the running process. It would appear that singletons don't extend across other processes running on the machine.

    In the end, I want to be able to kick off the BulkFileLoader and, at any time, find out how many rows it has processed, even if I close the application I used to start it. Can anyone see a solution to my problem?
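    One way to bridge the gap (a minimal sketch, not the only option): since a singleton cannot span processes, the loader can publish its counter through a named shared-memory section that any other process opens by name. This assumes .NET 4's System.IO.MemoryMappedFiles; the map name and class below are illustrative, not part of the original code.

        using System;
        using System.IO.MemoryMappedFiles;

        class ProgressChannel
        {
            const string MapName = "BulkFileLoaderProgress";   // illustrative name

            // Loader process: create the section once and keep the returned
            // object alive for the lifetime of the load.
            public static MemoryMappedFile Create()
            {
                return MemoryMappedFile.CreateOrOpen(MapName, sizeof(int));
            }

            // Loader process: call this wherever currentCount is incremented.
            public static void Publish(MemoryMappedFile map, int currentCount)
            {
                using (var view = map.CreateViewAccessor())
                    view.Write(0, currentCount);
            }

            // Monitoring process: open the existing section and read the count.
            public static int ReadCount()
            {
                using (var map = MemoryMappedFile.OpenExisting(MapName))
                using (var view = map.CreateViewAccessor())
                    return view.ReadInt32(0);
            }
        }

    Exposing the counter over .NET Remoting or a WCF named-pipe endpoint would achieve the same thing with more ceremony but more room to grow (for example, a Stop() method).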

  • SQL Temp Tables & Replication

    - by Refracted Paladin
    I have had an issue with our replication process and would like to salvage some data. I have a process in place where I connect to each subscriber before flagging it for reinitialization, and I run the script below to pull any data the subscriber may have entered during the "dark time". I am pretty sure this will work in a vanilla case. What I am unsure of is whether the global temporary table will persist through DB replication. To be clear, I am not trying to replicate the temp table; I just want to make sure it will still exist in the local DB after the replication so I can run the INSERT from it. Thoughts?

        USE MemberCenteredPlan

        -- Select data from tblPlan
        SELECT *
        INTO ##MyPlan
        FROM tblPlan
        WHERE PlanID = 407869

        ---------------------------
        -- Run replication process
        ---------------------------

        -- Insert the plan back into the DB
        INSERT INTO tblPlan
        SELECT *
        FROM ##MyPlan
        WHERE PlanID = 407869

        -- Drop the global temp table
        DROP TABLE ##MyPlan

        ---------------------------
        -- Run replication process
        ---------------------------

  • How to avoid copy-pasting exception catch blocks in .NET

    - by Budda
    Working with the .NET Framework, I have a service with a set of methods that can generate several types of exceptions: MyException2, MyExc1, Exception... To handle errors properly, each method contains the following sections:

        void Method1(...)
        {
            try
            {
                // ... required functionality
            }
            catch (MyException2 exc)
            {
                // ... process exception of MyException2 type
            }
            catch (MyExc1 exc)
            {
                // ... process exception of MyExc1 type
            }
            catch (Exception exc)
            {
                // ... process exception of Exception type
            }
            // ... process and return the result if necessary
        }

    It is very boring to have exactly the same boilerplate in EACH service method, with exactly the same exception-processing functionality. Is there any way to "group" these catch sections and reuse them with a single line (something similar to C++ macros)? Is there perhaps something new in .NET 4.0 related to this? Thanks. P.S. Any thoughts are welcome.
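    One line of ceremony per method is achievable without macros, and it works on .NET 3.5 as well: put the catch blocks in a single helper and pass each method body in as a delegate. A minimal sketch; the helper name is illustrative, and the exception stubs stand in for the question's own types.

        using System;

        class MyException2 : Exception { }   // stand-ins for the service's types
        class MyExc1 : Exception { }

        static class ServiceGuard
        {
            public static void Execute(Action body)
            {
                try
                {
                    body();
                }
                catch (MyException2) { /* ... process MyException2 ... */ }
                catch (MyExc1) { /* ... process MyExc1 ... */ }
                catch (Exception) { /* ... process any other exception ... */ }
            }
        }

        class SomeService
        {
            public void Method1()
            {
                ServiceGuard.Execute(() =>
                {
                    // ... required functionality ...
                });
            }
        }

    A Func<T> overload of Execute covers methods that need to return results.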

  • Detecting a message box opened in another application

    - by richie
    I am developing a Windows service, in VB.NET, that launches a legacy application to perform some work. The service acts as a wrapper around the legacy app, allowing users to automate an otherwise manual operation. Everything is working great, except that occasionally the legacy app displays a message box. When it does, the process halts until the message box is closed. As the service will be running on a server, there will be no user to close the message box. The service launches the legacy application in a System.Diagnostics.Process. My question is: is there a way to detect that a message box has been displayed by a process I started using System.Diagnostics.Process, and is there a way to close that message box from code? I've tried to be brief, so if you need more information please let me know. Thanks in advance, Richie
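    One commonly used approach (a hedged sketch, not guaranteed for every legacy app): poll the top-level windows belonging to the child's PID and close anything with the standard dialog window class "#32770", which is the class MessageBox windows use. All names below are illustrative.

        using System;
        using System.Runtime.InteropServices;
        using System.Text;

        static class DialogWatcher
        {
            const uint WM_CLOSE = 0x0010;

            delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

            [DllImport("user32.dll")]
            static extern bool EnumWindows(EnumWindowsProc callback, IntPtr lParam);

            [DllImport("user32.dll")]
            static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint pid);

            [DllImport("user32.dll", CharSet = CharSet.Auto)]
            static extern int GetClassName(IntPtr hWnd, StringBuilder name, int maxCount);

            [DllImport("user32.dll")]
            static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

            // Call periodically (e.g. from a timer) with the child's Process.Id.
            public static void CloseDialogs(int targetPid)
            {
                EnumWindows((hWnd, lParam) =>
                {
                    uint pid;
                    GetWindowThreadProcessId(hWnd, out pid);
                    if (pid == (uint)targetPid)
                    {
                        var name = new StringBuilder(256);
                        GetClassName(hWnd, name, name.Capacity);
                        if (name.ToString() == "#32770")   // standard dialog class
                            SendMessage(hWnd, WM_CLOSE, IntPtr.Zero, IntPtr.Zero);
                    }
                    return true;   // keep enumerating
                }, IntPtr.Zero);
            }
        }

    This assumes the service and the legacy app share a session (they do when the service starts the process). Note that a message box without a Cancel/close option, such as MB_YESNO, may ignore WM_CLOSE and need a simulated button click instead.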

  • Is it possible in Perl to require that a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know exactly what I'm asking for, but I'm writing a series of subroutines to be shared by many individual scripts that each process a different incoming flat file. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that makes it easier to manage it all. Each script handles an incoming flat file with its own formatting, sorting, grouping, and output requirements. One common aspect is that we have small text files holding counters that are used to name the output files, so that we never produce duplicate file names. Because the data processing differs per file, I need to open the counter file to get my counter value; since this is a common operation, I'd like to put it in a sub. I then need file-specific code to process the data, and I'd like a second sub that lets me write the updated counter back once the data is processed. Is there a way to make a call to the second sub a requirement whenever the first one is called? Ideally it would even be an error that prevents the script from running at all, much like a syntax error. EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for the current process:

        require "importLibrary.plx";

        # open the data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx to read the counter
        # value from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # process data based on unique formatting and requirements
            # output to task files based on requirements, naming files with $counter
            # increment $counter
        }

        # update the counter file with the new value of $counter
        &updateCounterInfo($counter);

  • How to get the user account associated with the WebDAV request?

    - by vdk
    When accessing a WebDAV share using Windows Explorer (not IE), the call is redirected through the svchost.exe process (hosting webclnt.dll). When I look up the PID of the process connected to the local port, I get the PID of svchost.exe. How can I find out which user account the call was made on behalf of?

  • MPI large-data all-to-all transfer

    - by csslayer
    My MPI application has processes that generate large amounts of data. Say we have N+1 processes (one master for control, the rest are workers); each worker process generates a large amount of data, which is currently just written to ordinary files named file1, file2, ..., fileN. The size of each file may be quite different. Now I need to send all of fileM to the rank-M process for the next stage of work, so it's essentially an all-to-all data transfer. My problem is: how should I use the MPI API to send these files efficiently? I used a Windows shared folder to transfer them before, but I don't think that's a good idea. I have considered MPI-IO (MPI_File) and MPI_Alltoall, but these functions don't seem well suited to my case. Plain MPI_Send and MPI_Recv seem hard to use because every process needs to transfer a large amount of data, and I don't want to use a distributed file system for now.

  • LINQ extension SelectMany in 3.5 vs 4.0?

    - by Moberg
    Hi. When I saw Darin's suggestion here...

        IEnumerable<Process> processes = new[] { "process1", "process2" }
            .SelectMany(Process.GetProcessesByName);

    (http://stackoverflow.com/questions/3059667/process-getprocessesbyname/3059733#3059733) ...I was a bit intrigued, and I tried it in VS2008 with .NET 3.5. It did not compile until I changed it to...

        IEnumerable<Process> res = new string[] { "notepad", "firefox", "outlook" }
            .SelectMany(s => Process.GetProcessesByName(s));

    Having read some of Darin's answers before, I suspected that I was the problem, and when I later got my hands on VS2010 with .NET 4.0, the original suggestion worked beautifully, as expected. My question is: what changed from 3.5 to 4.0 that makes this (new) syntax possible? Is it the extension methods that have been extended (hmm), or new rules for lambda and method-group conversion, or something else? I've tried to search, but my google-fu was not strong enough. Please forgive me if the question is a bit naive, and note that I've tagged it as beginner. :)

  • Is it possible to get the logged-in user's non-restricted token from a service on Vista?

    - by coolcake
    Hello all. I need to create a process with a high integrity level so that it can perform all its administrative tasks, but the created process should run on the currently logged-in user's desktop, i.e. it should not run in session 0. By default only administrators will log on at the console. The service should launch the process; since the service runs in session 0 under the SYSTEM account, can it somehow get the user's non-restricted token and use it in CreateProcessAsUser, so that the created process has an integrity level of high or system? Is this possible? One more thing: I need to get the non-restricted token without prompting the logged-in user for a user name or password. Thanks.
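    If the service runs as LocalSystem, the usual pattern (sketched below, untested and with illustrative names) is: grab the console user's token with WTSQueryUserToken, exchange it for its linked (unfiltered) token via GetTokenInformation with TokenLinkedToken, and pass that to CreateProcessAsUser targeting the interactive desktop. No password prompt is involved because SYSTEM holds SeTcbPrivilege.

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;

        static class InteractiveLauncher
        {
            [DllImport("kernel32.dll")]
            static extern uint WTSGetActiveConsoleSessionId();

            [DllImport("wtsapi32.dll", SetLastError = true)]
            static extern bool WTSQueryUserToken(uint sessionId, out IntPtr token);

            // infoClass 19 = TokenLinkedToken; with SeTcbPrivilege the result
            // is a primary token suitable for CreateProcessAsUser.
            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool GetTokenInformation(IntPtr token, int infoClass,
                out IntPtr linkedToken, int length, out int returnedLength);

            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool CreateProcessAsUser(IntPtr token, string appName,
                string cmdLine, IntPtr procAttrs, IntPtr threadAttrs, bool inherit,
                uint flags, IntPtr env, string curDir,
                ref STARTUPINFO si, out PROCESS_INFORMATION pi);

            [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
            struct STARTUPINFO
            {
                public int cb;
                public string lpReserved, lpDesktop, lpTitle;
                public int dwX, dwY, dwXSize, dwYSize, dwXCountChars,
                           dwYCountChars, dwFillAttribute, dwFlags;
                public short wShowWindow, cbReserved2;
                public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
            }

            [StructLayout(LayoutKind.Sequential)]
            struct PROCESS_INFORMATION
            {
                public IntPtr hProcess, hThread;
                public int dwProcessId, dwThreadId;
            }

            public static void Launch(string exePath)
            {
                IntPtr userToken, linkedToken;
                int len;
                if (!WTSQueryUserToken(WTSGetActiveConsoleSessionId(), out userToken))
                    throw new Win32Exception();

                if (!GetTokenInformation(userToken, 19, out linkedToken, IntPtr.Size, out len))
                    throw new Win32Exception();

                var si = new STARTUPINFO
                {
                    cb = Marshal.SizeOf(typeof(STARTUPINFO)),
                    lpDesktop = @"winsta0\default"   // the logged-in user's desktop
                };
                PROCESS_INFORMATION pi;
                if (!CreateProcessAsUser(linkedToken, exePath, null, IntPtr.Zero,
                        IntPtr.Zero, false, 0, IntPtr.Zero, null, ref si, out pi))
                    throw new Win32Exception();
            }
        }

    Note the linked token only exists when UAC filters the user's logon (i.e. the console user is an administrator); handle cleanup (CloseHandle) is omitted here for brevity.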

  • PowerShell: interact with an already-open Excel instance

    - by HKK
    To interact with Excel from PowerShell it is common to start a new Excel instance as follows:

        $x = New-Object -ComObject Excel.Application

    Instead of that, I already have an open Excel process. (I find it as follows:)

        $excelprocess = Get-Process | Where-Object {$_.name -eq "excel"} |
            Sort-Object -Property "StartTime" -Descending | Select-Object -First 1

    Is there a way to interact with this specific Excel process from PowerShell?
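    PowerShell can attach to a running instance through the COM Running Object Table rather than starting a new one; the one-liner is [Runtime.InteropServices.Marshal]::GetActiveObject("Excel.Application"). The same mechanism as a minimal C# sketch (names are illustrative):

        using System;
        using System.Runtime.InteropServices;

        class AttachToExcel
        {
            static void Main()
            {
                // Throws COMException if no running Excel has registered itself.
                dynamic excel = Marshal.GetActiveObject("Excel.Application");
                Console.WriteLine("Open workbooks: {0}", excel.Workbooks.Count);
            }
        }

    Caveat: GetActiveObject returns whichever Excel instance is registered in the ROT (typically the first one started); it cannot target a specific process by PID, so the Get-Process filtering above does not carry over.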

  • Navigate from one screen to another based on a time period in BlackBerry (Eclipse)

    - by upendra
    Hi friends. I am using the BlackBerry Eclipse environment for a BlackBerry application. I have a start-up screen with a progress bar, which I fill using a timer. Once the progress bar reaches 100%, I need to navigate to another screen. I am checking it like this:

        timer.cancel();
        if (i == 99) {
            UiApplication.getUiApplication().pushScreen(new TipCalculatorScreen());
        }

    Here i is the timer value, increased from 0 to 100, but this code is not working. How do I navigate from one screen to another after the progress bar completes? Please give me a small example.

  • try/finally in Ant

    - by Grzenio
    Hi. In my Ant script, which runs the end-to-end integration tests, I first start a process, then do some other things, then run the tests, and finally I need to make sure I kill the process. However, I need to kill the process even if something fails along the way, so I need an equivalent of try/finally. What is the recommended way of doing this?

  • Pattern or recommended refactoring for method

    - by iKode
    I've written a method that looks like this:

        public TimeSlotList processTimeSlots(DateTime startDT, DateTime endDT,
            string bookingType, IList<Booking> normalBookings,
            GCalBookings GCalBookings, List<DateTime> otherApiBookings)
        {
            // ... common process code ...
            while (utcTimeSlotStart < endDT)
            {
                if (bookingType == "x")
                {
                    // process normal bookings using IList<Booking> normalBookings
                }
                else if (bookingType == "y")
                {
                    // process Google Calendar bookings using GCalBookings GCalBookings
                }
                else if (bookingType == "z")
                {
                    // process other API bookings using List<DateTime> otherApiBookings
                }
            }
        }

    I'm calling this from three different places, each time passing a different booking type, the bookings I actually want processed, and two empty objects that aren't used for that booking type. I'm not able to get all the bookings into the same data type, which would make this easier, and each booking type needs to be processed differently, so I'm not sure how I can improve this. Any ideas?
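    One common refactoring (a sketch under the assumption that the per-type logic can move behind an interface; all names here are illustrative): give each booking source its own class and let the method depend only on the interface, which removes the string comparisons and the two dead parameters at every call site.

        using System;
        using System.Collections.Generic;

        public interface IBookingSource
        {
            void ProcessSlot(DateTime slotStart);   // source-specific per-slot logic
        }

        public class NormalBookingSource : IBookingSource
        {
            private readonly IList<Booking> bookings;   // Booking is the question's type
            public NormalBookingSource(IList<Booking> bookings) { this.bookings = bookings; }
            public void ProcessSlot(DateTime slotStart)
            {
                // ... process normal bookings for this slot ...
            }
        }

        // GCalBookingSource and OtherApiBookingSource implement the same
        // interface, so the original method keeps a single code path:
        public class TimeSlotProcessor
        {
            public TimeSlotList ProcessTimeSlots(DateTime startDT, DateTime endDT,
                IBookingSource source)
            {
                // ... common process code ...
                DateTime utcTimeSlotStart = startDT;
                while (utcTimeSlotStart < endDT)
                {
                    source.ProcessSlot(utcTimeSlotStart);
                    utcTimeSlotStart = utcTimeSlotStart.AddMinutes(30);   // illustrative slot size
                }
                return null;   // placeholder: build and return the TimeSlotList
            }
        }

    Each of the three call sites then constructs only the source it actually has bookings for.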

  • Process allocation in .NET

    - by mayap
    I'm writing a program that should perform calculations concurrently on inputs that reach the system all the time. I'm considering two approaches to allocating "calculation" processes:

    1. Allocate the processes at system initialization, insert their IDs into a Processes table, and each time I want to perform a calculation, check the table for a free process. The question: can I be sure that those processes are only for my use and that the operating system won't use them?

    2. Don't allocate processes in advance; each time a calculation should be done, ask the operating system for a free process.

    I need the following from a "calculation" process: notification when the calculation is finished, including whether it succeeded or failed; and if a process has failed, I need to assign the calculation to another process. Thanks in advance. Any help would be appreciated.
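    If the "calculation processes" can be threads inside your own process (the OS does not hand out reserved processes to applications), the .NET thread pool already does the bookkeeping of approach 2. A minimal sketch using .NET 4's Task Parallel Library; the names and the retry policy are illustrative.

        using System;
        using System.Threading.Tasks;

        static class Calculator
        {
            public static void Submit(int input)
            {
                Task<double> work = Task.Factory.StartNew(() => RunCalculation(input));

                // Fires when the calculation finishes, whether it succeeded or failed.
                work.ContinueWith(t =>
                {
                    if (t.IsFaulted)
                    {
                        Console.WriteLine("Failed: {0}", t.Exception.InnerException.Message);
                        Submit(input);   // reassign the failed calculation
                    }
                    else
                    {
                        Console.WriteLine("Succeeded: {0}", t.Result);
                    }
                });
            }

            static double RunCalculation(int input)
            {
                // ... the actual work; placeholder computation ...
                return input * 2.0;
            }
        }

    If the calculations genuinely must be separate OS processes, the same pattern applies with Process.Start plus the Exited event and ExitCode, but nothing stops the OS from scheduling other work; processes are never "reserved" for one application.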

  • AppleScript: click every element via GUI scripting

    - by user141146
    Hi, I'd like to iterate through every element of the iTunes window, try to click each element, and write each element I've clicked to a text file. The code I wrote below isn't working; specifically, I get the error: process "iTunes" doesn't understand the click_an_element message. Any thoughts on what I'm doing wrong? Thanks!!

        tell application "iTunes" to activate
        tell application "System Events"
            tell process "iTunes"
                set elements to get entire contents of window "iTunes"
                repeat with i from 1 to (length of elements)
                    set ele to item i of elements
                    click_an_element(ele)
                    show_what_you_clicked(ele)
                end repeat
            end tell
        end tell

        -------- handlers ------------
        to click_an_element(an_element)
            tell application "iTunes" to activate
            tell application "System Events"
                tell process "iTunes"
                    try
                        click an_element
                    end try
                end tell
            end tell
        end click_an_element

        to show_what_you_clicked(thing_to_type)
            tell application "TextEdit" to activate
            tell application "System Events"
                tell process "TextEdit"
                    keystroke thing_to_type
                    key code 36
                end tell
            end tell
        end show_what_you_clicked

  • How to define 2-bit numbers in C, if possible?

    - by Eddy
    For my university project I'm simulating a process called random sequential adsorption. One of the things I have to do involves randomly depositing squares (which cannot overlap) onto a lattice until there is no more room left, repeating the process several times in order to find the average 'jamming' coverage percentage. Basically I'm performing operations on a large array of integers with three possible values: 0, 1 and 2. Sites marked '0' are empty; sites marked '1' are full. Initially the array is defined like this:

        int i, j;
        int n = 1000000000;
        int array[n][n];

        for (j = 0; j < n; j++) {
            for (i = 0; i < n; i++) {
                array[i][j] = 0;
            }
        }

    Say I want to deposit 5x5 squares randomly on the array (so that they cannot overlap), where the squares are represented by '1's. This is done by choosing the x and y coordinates randomly and then creating a 5x5 square of '1's whose top-left point starts there. I would then mark sites near the square as '2's. These represent the sites that are unavailable, since depositing a square there would cause it to overlap an existing square. This process continues until there is no more room left to deposit squares on the array (basically, no more '0's left in the array). Anyway, to the point: I would like to make this process as efficient as possible by using bitwise operations. This would be easy if I didn't have to mark the sites near the squares. I was wondering whether it is possible to create a 2-bit number, so that I can also account for the sites marked '2'. Sorry if this sounds really complicated; I just wanted to explain why I want to do this.
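    There is no native 2-bit type in C, but two bits per site can be packed four-to-a-byte with masks and shifts, and the idea is language-agnostic. A minimal sketch (written in C# to match the other examples on this page; in C the same arithmetic works on an unsigned char array):

        class PackedLattice
        {
            private readonly byte[] cells;

            public PackedLattice(int sites)
            {
                cells = new byte[(sites + 3) / 4];   // four 2-bit sites per byte
            }

            public int Get(int site)   // returns 0 (empty), 1 (full) or 2 (blocked)
            {
                int shift = (site % 4) * 2;
                return (cells[site / 4] >> shift) & 0x3;
            }

            public void Set(int site, int value)   // value must be 0..3
            {
                int shift = (site % 4) * 2;
                cells[site / 4] = (byte)((cells[site / 4] & ~(0x3 << shift)) | (value << shift));
            }
        }

    A 2-D lattice maps onto this with site = y * width + x, cutting memory sixteen-fold versus one int per site.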

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' with a directory NFS-mounted from server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of a file, writes data, and flushes; process B seeks to the head of a file and reads. Are there any guarantees about the order in which the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on it, B may see a smaller number of changes that give the same result, due to NFS caching some of A's actions before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above detects the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So: is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

  • User switching without logging off

    - by mrh1967
    We need to switch users without logging off, so we can remotely administer a PC that runs as a limited user and whose VPN disconnects if that user logs off. I've got this working by killing the explorer process and then running explorer.exe with the administrator's credentials, as the following code shows:

        private void btnOk_Click(object sender, EventArgs e)
        {
            IntPtr tokenHandle = new IntPtr(0);
            if (LogonUser("administrator", Environment.UserDomainName, txtPassword.Text, 3, 0, ref tokenHandle))
            {
                ProcessStartInfo psi = new ProcessStartInfo(@"C:\Windows\explorer.exe");
                psi.UserName = "administrator";
                char[] pword = txtPassword.Text.ToCharArray();
                psi.Password = new System.Security.SecureString();
                foreach (char c in pword)
                {
                    psi.Password.AppendChar(c);
                }
                psi.UseShellExecute = false;
                psi.LoadUserProfile = true;
                restartExplorer(psi);
                this.Close();
            }
            else
            {
                MessageBox.Show("Wrong password", "Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
            }
        }

        private void restartExplorer(ProcessStartInfo psi)
        {
            Process[] procs = System.Diagnostics.Process.GetProcesses();
            foreach (Process p in procs)
            {
                if (p.ProcessName == "explorer")
                {
                    p.Kill();
                    break;
                }
            }
            System.Diagnostics.Process.Start(psi);
        }

        [DllImport("advapi32.dll", SetLastError = true)]
        public extern static bool LogonUser(String lpszUsername, String lpszDomain,
            String lpszPassword, int dwLogonType, int dwLogonProvider, ref IntPtr phToken);

    This code (and similar code that builds the ProcessStartInfo for the limited user) works perfectly and lets us change between the limited and administrator accounts without disconnecting the VPN. But it has one problem: if we use it to change to the administrator, make some changes to the system, and then change back to the limited user, everything works until the limited user logs off, at which point a blank desktop is displayed until CTRL-ALT-DEL is pressed and the user is logged off again. Because we block CTRL-ALT-DEL, the PC effectively hangs until it is powered off. Does anyone know how to stop this from happening, so we can change users without the PC hanging when they log off?

  • How to debug JBoss out of memory problem?

    - by user561733
    Hello, I am trying to debug a JBoss out-of-memory problem. When JBoss starts up and runs for a while, it uses memory as intended by the startup configuration. However, when some unknown user action is taken (or the log file grows to a certain size) in the sole web application JBoss is serving, memory increases dramatically and JBoss freezes. While it is frozen, it is difficult to kill the process or do anything else because of the low memory. When the process is finally killed via a -9 argument and the server is restarted, the log file is very small and only contains output from the startup of the new process, with no information on why the old process's memory increased so much. This is why it is so hard to debug: server.log has no information from the killed process. The log is set to grow to 2 GB, yet the log file for the new process is only about 300 KB, though it grows properly under normal memory circumstances.

    JBoss configuration:

        JBoss (MX MicroKernel) 4.0.3
        JDK 1.6.0 update 22
        PermSize=512m
        MaxPermSize=512m
        Xms=1024m
        Xmx=6144m

    Basic system info:

        Operating system: CentOS Linux 5.5
        Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64
        Processor: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores

    Typical pre-freeze conditions, a few minutes after the jboss service starts:

        Running processes: 183
        CPU load averages: 0.16 (1 min) 0.06 (5 mins) 0.09 (15 mins)
        CPU usage: 0% user, 0% kernel, 1% IO, 99% idle
        Real memory: 17.38 GB total, 2.46 GB used
        Virtual memory: 19.59 GB total, 0 bytes used
        Local disk space: 113.37 GB total, 11.89 GB used

    When JBoss freezes, the system looks like this:

        Running processes: 225
        CPU load averages: 4.66 (1 min) 1.84 (5 mins) 0.93 (15 mins)
        CPU usage: 0% user, 12% kernel, 73% IO, 15% idle
        Real memory: 17.38 GB total, 17.18 GB used
        Virtual memory: 19.59 GB total, 706.29 MB used
        Local disk space: 113.37 GB total, 11.89 GB used
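    A common first step (a hedged suggestion; the paths below are placeholders): make the JVM leave evidence behind instead of relying on server.log, by dumping the heap on OutOfMemoryError and logging GC activity via JAVA_OPTS in JBoss's bin/run.conf, and sampling the live process with the JDK 6 tools while it is still responsive:

        # in bin/run.conf
        JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/jboss"
        JAVA_OPTS="$JAVA_OPTS -verbose:gc -Xloggc:/var/log/jboss/gc.log"

        # from another shell while memory is climbing
        jmap -heap <pid>
        jstat -gcutil <pid> 5000

    The resulting .hprof file can then be opened in a heap analyzer to see which objects grew.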

  • Data Warehouse ETL slow - change primary key in dimension?

    - by Jubbles
    I have a working MySQL data warehouse organized as a star schema, and I am using Talend Open Studio for Data Integration 5.1 to create the ETL process. I would like this process to run once per day. I have estimated that one of the dimension tables (dimUser) will have approximately 2 million records and 23 columns. I created a small test ETL process in Talend that worked, but given the amount of data that may need to be updated daily, the current performance will not cut it: it takes the ETL process four minutes to UPDATE or INSERT 100 records into dimUser. Assuming a linear relationship between the record count and the UPDATE/INSERT time, there is no way the ETL can finish in 3-4 hours (my hope), let alone one day. Since I'm unfamiliar with Java, I rewrote the ETL as a Python script and ran into the same problem, although I did discover that if I do only INSERTs, the process goes much faster. I am pretty sure the bottleneck is caused by the UPDATE statements. The primary key in dimUser is an auto-increment integer. My friend suggested that I scrap this primary key and replace it with a multi-field primary key (in my case, 2-3 fields). Before I rip the test data out of my warehouse and change the schema, can anyone provide suggestions or guidelines related to:

    - the design of the data warehouse
    - the ETL process
    - how realistic it is to have an ETL process INSERT or UPDATE a few million records each day
    - whether my friend's suggestion will significantly help

    If you need any further information, just let me know and I'll post it.

    UPDATE - additional information:

        mysql> describe dimUser;

        Field        Type                 Null  Key  Default              Extra
        user_key     int(10) unsigned     NO    PRI  NULL                 auto_increment
        id_A         int(10) unsigned     NO         NULL
        id_B         int(10) unsigned     NO         NULL
        field_4      tinyint(4) unsigned  NO         0
        field_5      varchar(50)          YES        NULL
        city         varchar(50)          YES        NULL
        state        varchar(2)           YES        NULL
        country      varchar(50)          YES        NULL
        zip_code     varchar(10)          NO         99999
        field_10     tinyint(1)           NO         0
        field_11     tinyint(1)           NO         0
        field_12     tinyint(1)           NO         0
        field_13     tinyint(1)           NO         1
        field_14     tinyint(1)           NO         0
        field_15     tinyint(1)           NO         0
        field_16     tinyint(1)           NO         0
        field_17     tinyint(1)           NO         1
        field_18     tinyint(1)           NO         0
        field_19     tinyint(1)           NO         0
        field_20     tinyint(1)           NO         0
        create_date  datetime             NO         2012-01-01 00:00:00
        last_update  datetime             NO         2012-01-01 00:00:00
        run_id       int(10) unsigned     NO         999

    I used a surrogate key because I had read that it was good practice. From a business perspective, I want to stay aware of potentially fraudulent activity (say a user is associated with state X for 200 days and then the next day is associated with state Y; they could have moved, or their account could have been compromised), which is why the geographic data is kept. The field id_B may have a few distinct values of id_A associated with it, but I am interested in distinct (id_A, id_B) tuples. In the context of this information, my friend suggested that something like (id_A, id_B, zip_code) be the primary key. For the large majority of daily ETL runs (80%), I only expect the following fields to be updated on existing records: field_10 through field_14, last_update, and run_id (a foreign key to my etlLog table, used for ETL auditing purposes).

  • What to do if exec() fails?

    - by Grigory
    Let's suppose we have code doing something like this:

        int pipes[2];
        pipe(pipes);

        pid_t p = fork();
        if (0 == p)
        {
            dup2(pipes[1], STDOUT_FILENO);
            execv("/path/to/my/program", NULL);
            ...
        }
        else
        {
            // ... parent process stuff
        }

    As you can see, it creates a pipe, forks, and uses the pipe to read the child's output (I can't use popen here, because I also need the PID of the child process for other purposes). The question is: what should happen if, in the above code, execv fails? Should I call exit() or abort()? As far as I know, those functions close the open file descriptors. Since a forked process inherits the parent's file descriptors, does that mean the file descriptors used by the parent process will become unusable?

  • Memory management in Objective-C: returning objects from methods

    - by geeth
    Hi, please clarify how to deal with objects returned from methods. Below, I get employee details from the GeEmployeetData method, which returns an autoreleased object.

    1. Do I have to retain the returned object in the Process method?
    2. Can I release *emp in the Process method?

        - (void)Process {
            Employee *emp = [self GeEmployeetData];
        }

        + (Employee *)GeEmployeetData {
            Employee *emp = [[Employee alloc] init];
            // fill entity
            return [emp autorelease];
        }
