Search Results

Search found 24784 results on 992 pages for 'process integration packs'.


  • A question about making a C# class persistent during a file load

    - by Adam
    Apologies for the nondescript title; however, it's the best I could think of for the moment.

    Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running, and to be able to call methods from within it, even if its calling class is shut down. The singleton class is simple. It starts a thread that loads the file into the database, while having methods to report on the current status. In a nutshell it's a little like this:

        public sealed class BulkFileLoader
        {
            static BulkFileLoader instance = null;
            int currentCount = 0;

            BulkFileLoader() { }

            public static BulkFileLoader Instance
            {
                // Instantiate the instance class if necessary, and return it
            }

            public void Go()
            {
                // kick off the 'ProcessFile' thread
            }

            public int GetCurrentCount()
            {
                return currentCount;
            }

            private void ProcessFile()
            {
                while (/* more rows in the import file */)
                {
                    // insert the row into the database
                    currentCount++;
                }
            }
        }

    The idea is that you can get an instance of BulkFileLoader to execute, which will process a file to load, while at any time you can get real-time updates on the number of rows it has done so far using the GetCurrentCount() method. This works fine, except that the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.

    I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it around as a process. This fixes one problem, since now when I kick off the process, the file will continue to load even if I close the class that called the process. However, now the problem is that I cannot get updates on the current count: if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the instance that is currently in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.

    In the end, I want to be able to kick off the BulkFileLoader and at any time find out how many rows it has processed, even if I close the application I used to start it. Can anyone see a solution to my problem?
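
    One way to bridge that gap -- a minimal sketch, assuming .NET 4's MemoryMappedFile is available, with "BulkFileLoaderCount" as an arbitrary map name, not code from the original post -- is for the loader process to publish its count into named shared memory that any other process can open:

        using System;
        using System.IO.MemoryMappedFiles;

        // Loader process: publish progress into a named shared-memory region.
        class ProgressPublisher
        {
            static MemoryMappedFile map =
                MemoryMappedFile.CreateOrOpen("BulkFileLoaderCount", sizeof(int));

            public static void Publish(int currentCount)
            {
                using (var accessor = map.CreateViewAccessor())
                    accessor.Write(0, currentCount);   // overwrite the single int slot
            }
        }

        // Any other process: open the same named map and read the latest count.
        class ProgressReader
        {
            public static int ReadCount()
            {
                using (var map = MemoryMappedFile.OpenExisting("BulkFileLoaderCount"))
                using (var accessor = map.CreateViewAccessor())
                    return accessor.ReadInt32(0);
            }
        }

    Any other IPC channel (a WCF named pipe, remoting, or simply a status row in the database) would do equally well; the essential point is that the count must travel over something other than an in-process static field.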


  • Continuous Delivery TFS

    - by swapneel
    Is it possible to achieve Continuous Delivery using TFS for something other than a web project, e.g. a Windows service? There are a thousand posts on how to use msdeploy with TFS for web projects. I am trying to understand why there are no resources -- blogs, articles, MSDN pages, or best practices -- for Continuous Delivery of a Windows service using TFS. I am not sure how to achieve the following without any working reference material, which is frustrating (a sketch of steps 2-4 follows below):

    1. Archive the existing codebase on the remote server, for the service as well as for the web project (on the remote server, not the integration server, please!).
    2. Stop the services on the remote server (not the integration server).
    3. Copy the new codebase to the remote server.
    4. Start the services.
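
    For steps 2-4, a minimal sketch of what a custom deployment step could do -- assuming the build account has admin rights on the target machine, and with placeholder service, server, and share names -- using System.ServiceProcess plus plain file copying:

        using System;
        using System.IO;
        using System.ServiceProcess;   // reference System.ServiceProcess.dll

        class DeployStep
        {
            static void Main()
            {
                // Placeholder names -- substitute the real service and paths.
                var service = new ServiceController("MyWindowsService", "RemoteServer01");

                // 2. Stop the service on the remote server and wait for it to die.
                if (service.Status != ServiceControllerStatus.Stopped)
                {
                    service.Stop();
                    service.WaitForStatus(ServiceControllerStatus.Stopped,
                                          TimeSpan.FromMinutes(2));
                }

                // 3. Copy the new build output over an admin share (one of many options).
                foreach (var file in Directory.GetFiles(@"\\buildserver\drops\latest"))
                    File.Copy(file,
                              Path.Combine(@"\\RemoteServer01\c$\Services\MyWindowsService",
                                           Path.GetFileName(file)),
                              overwrite: true);

                // 4. Start the service again.
                service.Start();
                service.WaitForStatus(ServiceControllerStatus.Running,
                                      TimeSpan.FromMinutes(2));
            }
        }

    Wired into a TFS build as a post-build step (or an InvokeProcess activity), this covers the stop/copy/start cycle that msdeploy handles for web projects.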


  • How to avoid exception-catch copy-paste in .NET

    - by Budda
    Working with the .NET Framework, I have a service with a set of methods that can generate several types of exceptions: MyException2, MyExc1, Exception... To work properly, every method contains the following sections:

        void Method1(...)
        {
            try
            {
                // ... required functionality
            }
            catch (MyException2 exc)
            {
                // ... process exception of MyException2 type
            }
            catch (MyExc1 exc)
            {
                // ... process exception of MyExc1 type
            }
            catch (Exception exc)
            {
                // ... process exception of Exception type
            }
            // ... process and return result if necessary
        }

    It is very boring to have exactly the same stuff in EACH service method, with exactly the same exception-processing functionality. Is there any way to "group" these catch sections and use only one line (something similar to C++ macros)? Perhaps something new in .NET 4.0 is related to this topic? Thanks. P.S. Any thoughts are welcome.
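
    One common way to factor this out without macros -- a sketch, not a framework feature -- is to centralize the catch blocks in a helper that takes a delegate, so each service method carries only its real work (MyException2 and MyExc1 are the poster's exception types):

        using System;

        static class Guarded
        {
            // All service methods share one copy of the catch logic.
            public static T Run<T>(Func<T> body)
            {
                try
                {
                    return body();
                }
                catch (MyException2 exc)
                {
                    // ... process exception of MyException2 type,
                    // then rethrow or return a default, as your methods do today
                    throw;
                }
                catch (MyExc1 exc)
                {
                    // ... process exception of MyExc1 type
                    throw;
                }
                catch (Exception exc)
                {
                    // ... process exception of Exception type
                    throw;
                }
            }
        }

        // Each method body shrinks to one line:
        //   int result = Guarded.Run(() => DoRealWork(args));
        // An Action overload covers void methods the same way.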


  • Newer versions of Chromium?

    - by user128712
    I've been wondering why Ubuntu's version of Chromium is still on 25 -- from Lucid up until Raring -- while the current stable version is 28. The only reason I use Chromium is the web apps integration. PS: I have tried the web apps integration on another version of Chromium (not Google Chrome) which is 28, and it seems that some features I use were lost on any version other than 25. EDIT: Sorry, I forgot to mention that I use 13.04 and also the development branch, 13.10. My question is: why doesn't Ubuntu update the package in the repository to the current stable version?


  • Detecting a message box opened in another application

    - by richie
    I am developing a Windows service, in VB.NET, that launches a legacy application that performs some work. The service acts as a wrapper around the legacy app, allowing users to automate an otherwise manual operation. Everything is working great, except that occasionally the legacy app displays a message box. When it does, the process halts until the message box is closed. As the service will be running on a server, there will be no user to close the message box. The service launches the legacy application in a System.Diagnostics.Process. My question is: is there a way to detect that a message box has been displayed by a process that I have started using System.Diagnostics.Process, and is there a way, through code, to close the message box? I've tried to be brief, so if you need more information please let me know. Thanks in advance. Richie
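
    One possible approach -- a hedged sketch, not a tested answer -- is to poll from the service for top-level dialog windows (the Win32 dialog class is "#32770") that belong to the launched process, and post each one a WM_CLOSE. The same P/Invoke declarations work from VB.NET; C# is used here for brevity:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        static class DialogWatcher
        {
            [DllImport("user32.dll")]
            static extern IntPtr FindWindowEx(IntPtr parent, IntPtr childAfter,
                                              string className, string windowTitle);

            [DllImport("user32.dll")]
            static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

            [DllImport("user32.dll")]
            static extern bool PostMessage(IntPtr hWnd, uint msg,
                                           IntPtr wParam, IntPtr lParam);

            const uint WM_CLOSE = 0x0010;

            // Close any standard dialog belonging to the given process.
            public static void CloseDialogsOf(Process legacyApp)
            {
                IntPtr hWnd = IntPtr.Zero;
                while ((hWnd = FindWindowEx(IntPtr.Zero, hWnd, "#32770", null))
                       != IntPtr.Zero)
                {
                    uint pid;
                    GetWindowThreadProcessId(hWnd, out pid);
                    if (pid == (uint)legacyApp.Id)
                        PostMessage(hWnd, WM_CLOSE, IntPtr.Zero, IntPtr.Zero);
                }
            }
        }

    One caveat: window enumeration only sees windows on the caller's desktop, so if the service and the legacy app end up in different sessions, the polling has to run from the app's own session.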


  • SQL Temp Tables & Replication

    - by Refracted Paladin
    I have had an issue with our replication process and would like to salvage some data. I have a process in place where I connect to each subscriber before flagging them for reinitialization, and I run the script below to pull any data they may have entered during the "dark time". I am pretty sure this will work in a vanilla scenario. What I am unsure of is whether the Global Temporary Table will persist through DB replication. To be clear, I am not trying to replicate the temp table; I just want to make sure it will still exist in the local DB after the replication so I can run the INSERT from it. Thoughts?

        USE MemberCenteredPlan

        -- Select data from tblPlan
        SELECT *
        INTO ##MyPlan
        FROM tblPlan
        WHERE PlanID = 407869

        ---------------------------
        -- Run Replication Process
        ---------------------------

        -- Insert plan back into DB
        INSERT INTO tblPlan
        SELECT *
        FROM ##MyPlan
        WHERE PlanID = 407869

        -- Drop global temp table
        DROP TABLE ##MyPlan

        ---------------------------
        -- Run Replication Process
        ---------------------------


  • How to get the user account associated with a WebDAV request?

    - by vdk
    When accessing a WebDAV share using Windows Explorer (not IE), the call is redirected through the svchost.exe process (with webclnt.dll). When I get the PID of the process that is connected to the local port, I get the PID of the svchost.exe process. How can I get the user account that the call was associated with?
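
    A starting point for mapping a PID to an account -- a sketch with an assumed placeholder PID -- is WMI's Win32_Process.GetOwner. The caveat is that it reports the account the process runs under; for svchost.exe that will be the WebClient service's account rather than the end user who triggered the request, which is exactly the wall this question hits, but it shows the mechanism:

        using System;
        using System.Management;   // reference System.Management.dll

        class ProcessOwner
        {
            static void Main()
            {
                uint pid = 1234;   // placeholder: the PID bound to the local port
                var searcher = new ManagementObjectSearcher(
                    "SELECT * FROM Win32_Process WHERE ProcessId = " + pid);

                foreach (ManagementObject process in searcher.Get())
                {
                    object[] args = new object[2];   // out params: user, domain
                    int result = Convert.ToInt32(process.InvokeMethod("GetOwner", args));
                    if (result == 0)
                        Console.WriteLine("{0}\\{1}", args[1], args[0]);   // domain\user
                }
            }
        }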


  • RSpec + Selenium tests for .NET on Windows

    - by John
    I'm a Rails developer doing TDD on a Mac with RSpec, Capybara and Selenium WebDriver. Now I have been asked by my company to use this approach in a .NET-on-Windows environment. What is the best way of doing this? I could just install Ruby and use RSpec, Capybara and Selenium WebDriver for integration testing. But what about unit tests? I also looked at NSpec, but I'm not sure if I can combine that with Capybara or Selenium for integration tests. What would be a good approach here?


  • MPI large data all-to-all transfer

    - by csslayer
    My MPI application has some processes that generate large data. Say we have N+1 processes (one for master control, the others workers); each worker process generates large data, which is currently just written to a normal file, named file1, file2, ..., fileN. The size of each file may be quite different. Now I need to send each fileM to the rank-M process to do the next job, so it's just like an all-to-all data transfer. My problem is how I should use the MPI API to send these files efficiently. I used to transfer them via a Windows shared folder, but I don't think that's a good idea. I have thought about MPI_File and MPI_Alltoall, but those functions don't seem well suited to my case. Simple MPI_Send and MPI_Recv seem hard to use, because every process needs to transfer large data, and I don't want to use a distributed file system for now.


  • Is it possible to get the logged-in user's non-restricted token from a service on Vista?

    - by coolcake
    Hello All, I need to create a process with high integrity level, so that it can do all the administrative tasks. But the created process should run on the currently logged-in desktop, i.e. it should not run in session 0. By default only administrators will log on to the console. The service should launch the process, as the service is running in session 0 under the system account. Can it somehow get the non-restricted token and use it in CreateProcessAsUser, so that the created process has an integrity level of high or system? Is it possible? One more thing: I should get the non-restricted token without prompting for the user name or password of the logged-in user. Thanks
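
    A sketch of the route usually taken from a SYSTEM service -- hedged, since real code needs token duplication, handle cleanup, and error paths that are omitted here: query the console user's token with WTSQueryUserToken (no password needed when running as SYSTEM), then ask for its linked token, which is the unfiltered administrator token when UAC has split it:

        using System;
        using System.Runtime.InteropServices;

        static class ElevatedLaunch
        {
            [DllImport("kernel32.dll")]
            static extern uint WTSGetActiveConsoleSessionId();

            [DllImport("wtsapi32.dll", SetLastError = true)]
            static extern bool WTSQueryUserToken(uint sessionId, out IntPtr token);

            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool GetTokenInformation(IntPtr token, int infoClass,
                                                   out IntPtr info, int infoLength,
                                                   out int returnLength);

            const int TokenLinkedToken = 19;   // TOKEN_INFORMATION_CLASS value

            // Returns the console user's unfiltered (elevated) token, assuming
            // the caller runs as SYSTEM (WTSQueryUserToken needs SE_TCB_NAME).
            public static IntPtr GetConsoleUserElevatedToken()
            {
                IntPtr userToken;
                if (!WTSQueryUserToken(WTSGetActiveConsoleSessionId(), out userToken))
                    throw new System.ComponentModel.Win32Exception();

                IntPtr linkedToken;
                int len;
                if (!GetTokenInformation(userToken, TokenLinkedToken,
                                         out linkedToken, IntPtr.Size, out len))
                    throw new System.ComponentModel.Win32Exception();

                // Note: the linked token is an impersonation token; duplicate it
                // to a primary token (DuplicateTokenEx) before CreateProcessAsUser.
                return linkedToken;
            }
        }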


  • LINQ extension SelectMany in 3.5 vs 4.0?

    - by Moberg
    Hi. When I saw Darin's suggestion here...

        IEnumerable<Process> processes = new[] { "process1", "process2" }
            .SelectMany(Process.GetProcessesByName);

    (http://stackoverflow.com/questions/3059667/process-getprocessesbyname/3059733#3059733)

    ...I was a bit intrigued, and I tried it in VS2008 with .NET 3.5 -- and it did not compile unless I changed it to...

        IEnumerable<Process> res = new string[] { "notepad", "firefox", "outlook" }
            .SelectMany(s => Process.GetProcessesByName(s));

    Having read some of Darin's answers before, I suspected that I was the problem, and when I later got my hands on VS2010 with .NET 4.0, the original suggestion -- as expected -- worked beautifully. My question is: what has happened from 3.5 to 4.0 that makes this (new) syntax possible? Is it the extension methods that have been extended (hmm), or new rules for lambda syntax, or...? I've tried to search, but my google-fu was not strong enough. Please forgive me if the question is a bit naive, and note that I've tagged it as beginner :)
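
    One concrete .NET 4.0 change that bears on this -- offered as context rather than a definitive diagnosis, since the C# 4 compiler's type inference around method groups also changed -- is that Func became variant (Func<in T, out TResult>), so a delegate producing Process[] is now assignable where a delegate producing IEnumerable<Process> is expected:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class VarianceDemo
        {
            static void Main()
            {
                // GetProcessesByName returns Process[], not IEnumerable<Process>.
                Func<string, Process[]> exact = Process.GetProcessesByName;

                // Compiles under .NET 4.0 because Func<in T, out TResult> is
                // covariant in its result; under .NET 3.5 this line is an error.
                Func<string, IEnumerable<Process>> widened = exact;

                foreach (Process p in widened("notepad"))
                    Console.WriteLine(p.Id);
            }
        }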


  • PowerShell: interact with open Excel

    - by HKK
    To interact with Excel in PowerShell, it is common to start a new Excel instance as follows:

        $x = New-Object -ComObject Excel.Application

    Instead of that, I already have an open Excel process. (I get it as follows:)

        $excelprocess = Get-Process |
            Where-Object { $_.name -eq "excel" } |
            Sort-Object -Property "StartTime" -Descending |
            Select-Object -First 1

    Is there a way to interact with this specific Excel process from PS?
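
    If attaching to an already-running Excel at the COM level is acceptable, the Running Object Table route may help -- sketched in C#; from PowerShell the equivalent call is [System.Runtime.InteropServices.Marshal]::GetActiveObject("Excel.Application"). Note it returns whichever instance registered itself in the ROT, so selecting one by PID is not supported here:

        using System;
        using System.Runtime.InteropServices;

        class AttachToExcel
        {
            static void Main()
            {
                // Grabs the Excel instance registered in the Running Object Table.
                dynamic excel = Marshal.GetActiveObject("Excel.Application");
                Console.WriteLine(excel.Workbooks.Count);
            }
        }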


  • Navigate from one screen to another based on a time period in BlackBerry Eclipse

    - by upendra
    Hi Friends, I am using the BlackBerry Eclipse environment. For my BlackBerry application I am using a startup screen with a progress bar, which I am filling using a timer. After the progress bar reaches 100%, I need to navigate to another screen. I am checking it like this:

        timer.cancel();
        if (i == 99)
            UiApplication.getUiApplication().pushScreen(new TipCalculatorScreen());

    Here i is the time, increased from 0 to 100, but this code is not working. How do I navigate from one screen to another page after the progress bar is completed? Please give me a small example.


  • Pattern or recommended refactoring for method

    - by iKode
    I've written a method that looks like this:

        public TimeSlotList processTimeSlots(DateTime startDT, DateTime endDT,
            string bookingType, IList<Booking> normalBookings,
            GCalBookings gCalBookings, List<DateTime> otherApiBookings)
        {
            // ..... common process code ......

            while (utcTimeSlotStart < endDT)
            {
                if (bookingType == "x")
                {
                    // process normal bookings using IList<Booking> normalBookings
                }
                else if (bookingType == "y")
                {
                    // process Google Cal bookings using GCalBookings gCalBookings
                }
                else if (bookingType == "z")
                {
                    // process other API bookings using List<DateTime> otherApiBookings
                }
            }
        }

    So I'm calling this from 3 different places, each time passing a different booking type, and in each case passing the bookings I'm interested in processing, as well as 2 empty objects that aren't used for that booking type. I'm not able to get the bookings all into the same datatype, which would make this easier, and each booking type needs to be processed differently, so I'm not sure how I can improve this. Any ideas?
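
    One refactoring that fits this shape -- sketched with invented type names beyond the poster's Booking and TimeSlotList, and an assumed slot length -- is to hide the per-type branching behind a small strategy interface, so the common slot loop no longer cares which booking source it is driving and each caller passes only the one collection it owns:

        using System;
        using System.Collections.Generic;

        // Each booking source knows how to process one time slot.
        interface IBookingSource
        {
            void ProcessSlot(DateTime utcTimeSlotStart);
        }

        class NormalBookingSource : IBookingSource
        {
            private readonly IList<Booking> bookings;   // poster's Booking type
            public NormalBookingSource(IList<Booking> bookings) { this.bookings = bookings; }
            public void ProcessSlot(DateTime slot) { /* process normal bookings */ }
        }

        // GCalBookingSource and OtherApiBookingSource implement the same interface.

        class TimeSlotProcessor
        {
            // The common code no longer branches on a bookingType string.
            public TimeSlotList ProcessTimeSlots(DateTime startDT, DateTime endDT,
                                                 IBookingSource source)
            {
                // ..... common process code ......
                var utcTimeSlotStart = startDT;
                while (utcTimeSlotStart < endDT)
                {
                    source.ProcessSlot(utcTimeSlotStart);
                    utcTimeSlotStart = utcTimeSlotStart.AddMinutes(30);   // slot length assumed
                }
                return new TimeSlotList();
            }
        }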


  • Come visit us at OOW 2012 B2B Demo Booth!

    - by user701307
    You’re invited to visit us at the Oracle B2B Demo POD at Oracle OpenWorld and JavaOne 2012. OOW offers a unique opportunity to meet the engineers who have developed the Oracle B2B product. Please stop by our booth to see cool demos on EDI X12, EDIFACT and SBRES (used in the airline industry). We will also be showing integration with OSB, SOA Suite and BAM. Use this opportunity to see the product in action, learn, and get answers to your questions. We will be happy to meet you, hear about your B2B integration use cases, and discuss our roadmap. The demo pod will be available at the Fusion Middleware Demo POD area from Monday, October 1 through Wednesday, October 3, 2012. We look forward to seeing you there! Happy OOW 2012!


  • applescript click every element via gui scripting

    - by user141146
    Hi, I'd like to iterate through every element on an iTunes window and try to click on each element. I'd also like to write to a text file showing each element that I've clicked. The code that I wrote below isn't working. Specifically, I get the error: process "iTunes" doesn’t understand the click_an_element message. Thoughts on what I'm doing wrong? Thanks!!

        tell application "iTunes" to activate
        tell application "System Events"
            tell process "iTunes"
                set elements to get entire contents of window "iTunes"
                repeat with i from 1 to (length of elements)
                    set ele to item i of elements
                    click_an_element(ele)
                    show_what_you_clicked(ele)
                end repeat
            end tell
        end tell

        -------- handlers ------------

        to click_an_element(an_element)
            tell application "iTunes" to activate
            tell application "System Events"
                tell process "iTunes"
                    try
                        click an_element
                    end try
                end tell
            end tell
        end click_an_element

        to show_what_you_clicked(thing_to_type)
            tell application "TextEdit" to activate
            tell application "System Events"
                tell process "TextEdit"
                    keystroke thing_to_type
                    key code 36
                end tell
            end tell
        end show_what_you_clicked


  • How to define 2-bit numbers in C, if possible?

    - by Eddy
    For my university project I'm simulating a process called random sequential adsorption. One of the things I have to do involves randomly depositing squares (which cannot overlap) onto a lattice until there is no more room left, repeating the process several times in order to find the average 'jamming' coverage %.

    Basically I'm performing operations on a large array of integers, of which 3 possible values exist: 0, 1 and 2. The sites marked with '0' are empty, the sites marked with '1' are full. Initially the array is defined like this:

        int i, j;
        int n = 1000000000;
        int array[n][n];

        for (j = 0; j < n; j++)
        {
            for (i = 0; i < n; i++)
            {
                array[i][j] = 0;
            }
        }

    Say I want to deposit 5*5 squares randomly on the array (so that they cannot overlap), where the squares are represented by '1's. This would be done by choosing the x and y coordinates randomly and then creating a 5*5 square of '1's with the top-left point of the square starting at that point. I would then mark sites near the square as '2's. These represent the sites that are unavailable, since depositing a square at those sites would cause it to overlap an existing square. This process continues until there is no more room left to deposit squares on the array (basically, no more '0's left in the array).

    Anyway, to the point: I would like to make this process as efficient as possible, by using bitwise operations. This would be easy if I didn't have to mark sites near the squares. I was wondering whether creating a 2-bit number would be possible, so that I can account for the sites marked with '2'. Sorry if this sounds really complicated, I just wanted to explain why I want to do this.
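
    C has no native 2-bit type; the usual answer is to pack sixteen 2-bit cells per 32-bit word with shifts and masks. The masking arithmetic is sketched below in C# (the document's convention for examples here); the same expressions translate directly to C. Note also that the n = 1000000000 lattice above has 10^18 cells and will not fit in memory at any width, so the real lattice presumably needs to be far smaller:

        using System;

        // Packs 2-bit lattice cells (values 0, 1, 2) sixteen to a uint.
        class TwoBitLattice
        {
            private readonly uint[] words;
            private readonly long side;

            public TwoBitLattice(long side)
            {
                this.side = side;
                words = new uint[(side * side + 15) / 16];   // 16 cells per word
            }

            public int Get(long x, long y)
            {
                long i = y * side + x;
                int shift = (int)(i & 15) * 2;               // bit offset within word
                return (int)((words[i >> 4] >> shift) & 3u);
            }

            public void Set(long x, long y, int value)       // value in {0, 1, 2}
            {
                long i = y * side + x;
                int shift = (int)(i & 15) * 2;
                words[i >> 4] = (words[i >> 4] & ~(3u << shift))
                              | ((uint)value << shift);
            }
        }

    This cuts memory sixteen-fold versus int cells while keeping every get/set a couple of shifts and masks.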


  • Process allocation in .NET

    - by mayap
    I'm writing a program which should perform calculations concurrently according to inputs which reach the system all the time. I'm considering 2 approaches to allocating "calculation" processes:

    1. Allocating processes at system initialization, inserting their ids into a Processes table, and each time I want to perform a calculation, checking in the table which process is free. The question: can I be sure that those processes are only for my use and that the operating system doesn't use them?

    2. Not allocating processes in advance. Each time a calculation should be done, asking the operating system for a free process.

    I need to know the following from a "calculation" process:

    - when the calculation is finished, and whether it succeeded or failed;
    - if a process has failed, I need to assign the calculation to another process.

    Thanks in advance. Any help would be appreciated.
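
    For what it's worth, a general-purpose OS does not reserve processes for one application this way; in .NET the same three requirements (completion notification, success/failure, reassignment on failure) are commonly met with thread-pool tasks inside a single process. A minimal sketch, assuming isolation in separate OS processes is not actually mandatory:

        using System;
        using System.Threading.Tasks;

        class CalculationDispatcher
        {
            // Run a calculation on the thread pool; report completion and
            // success/failure; re-queue it once if it failed.
            public static Task Dispatch(Action calculation)
            {
                return Task.Factory.StartNew(calculation)
                    .ContinueWith(t =>
                    {
                        if (t.IsFaulted)
                        {
                            Console.WriteLine("Failed: "
                                + t.Exception.InnerException.Message);
                            Task.Factory.StartNew(calculation);   // reassign the work
                        }
                        else
                        {
                            Console.WriteLine("Succeeded");
                        }
                    });
            }
        }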


  • User switching without logging off

    - by mrh1967
    We need to switch users without logging off, so we can remotely administer a PC running with a limited user that will disconnect from the VPN if the user logs off. I've got this working by killing the explorer process and then running explorer.exe with the administrator's credentials, as the following code shows:

        private void btnOk_Click(object sender, EventArgs e)
        {
            IntPtr tokenHandle = new IntPtr(0);
            if (LogonUser("administrator", Environment.UserDomainName,
                          txtPassword.Text, 3, 0, ref tokenHandle))
            {
                ProcessStartInfo psi = new ProcessStartInfo(@"C:\Windows\explorer.exe");
                psi.UserName = "administrator";
                char[] pword = txtPassword.Text.ToCharArray();
                psi.Password = new System.Security.SecureString();
                foreach (char c in pword)
                {
                    psi.Password.AppendChar(c);
                }
                psi.UseShellExecute = false;
                psi.LoadUserProfile = true;
                restartExplorer(psi);
                this.Close();
            }
            else
            {
                MessageBox.Show("Wrong password", "Error",
                                MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
            }
        }

        private void restartExplorer(ProcessStartInfo psi)
        {
            Process[] procs = System.Diagnostics.Process.GetProcesses();
            foreach (Process p in procs)
            {
                if (p.ProcessName == "explorer")
                {
                    p.Kill();
                    break;
                }
            }
            System.Diagnostics.Process.Start(psi);
        }

        [DllImport("advapi32.dll", SetLastError = true)]
        public extern static bool LogonUser(String lpszUsername, String lpszDomain,
            String lpszPassword, int dwLogonType, int dwLogonProvider,
            ref IntPtr phToken);

    This code, and similar code that does the same but builds the ProcessStartInfo for the limited user, works perfectly and allows changing between the limited and administrator accounts without disconnecting the VPN. But it has one problem: if we use this to change to the administrator user, make some changes to the system, then change back to the limited user, all works OK until the limited user logs off, at which point a blank desktop is displayed until CTRL-ALT-DEL is pressed and the user is logged off again. Because we block CTRL-ALT-DEL, the PC effectively hangs until it is powered off. Does anyone know how to stop this from happening, so we can change users without the PC hanging when they log off?


  • Data Warehouse ETL slow - change primary key in dimension?

    - by Jubbles
    I have a working MySQL data warehouse that is organized as a star schema, and I am using Talend Open Studio for Data Integration 5.1 to create the ETL process. I would like this process to run once per day. I have estimated that one of the dimension tables (dimUser) will have approximately 2 million records and 23 columns.

    I created a small test ETL process in Talend that worked, but given the amount of data that may need to be updated daily, the current performance will not cut it. It takes the ETL process four minutes to UPDATE or INSERT 100 records to dimUser. If I assume a linear relationship between the count of records and the amount of time to UPDATE or INSERT, then there is no way the ETL can finish in 3-4 hours (my hope), let alone one day. Since I'm unfamiliar with Java, I wrote the ETL as a Python script and ran into the same problem. I did discover, though, that if I did only INSERTs, the process went much faster. I am pretty sure that the bottleneck is caused by the UPDATE statements.

    The primary key in dimUser is an auto-increment integer. My friend suggested that I scrap this primary key and replace it with a multi-field primary key (in my case, 2-3 fields). Before I rip the test data out of my warehouse and change the schema, can anyone provide suggestions or guidelines related to:

    - the design of the data warehouse
    - the ETL process
    - how realistic it is to have an ETL process INSERT or UPDATE a few million records each day
    - whether my friend's suggestion will significantly help

    If you need any further information, just let me know and I'll post it.

    UPDATE - additional information:

        mysql> describe dimUser;
        Field        Type                 Null  Key  Default              Extra
        user_key     int(10) unsigned     NO    PRI  NULL                 auto_increment
        id_A         int(10) unsigned     NO         NULL
        id_B         int(10) unsigned     NO         NULL
        field_4      tinyint(4) unsigned  NO         0
        field_5      varchar(50)          YES        NULL
        city         varchar(50)          YES        NULL
        state        varchar(2)           YES        NULL
        country      varchar(50)          YES        NULL
        zip_code     varchar(10)          NO         99999
        field_10     tinyint(1)           NO         0
        field_11     tinyint(1)           NO         0
        field_12     tinyint(1)           NO         0
        field_13     tinyint(1)           NO         1
        field_14     tinyint(1)           NO         0
        field_15     tinyint(1)           NO         0
        field_16     tinyint(1)           NO         0
        field_17     tinyint(1)           NO         1
        field_18     tinyint(1)           NO         0
        field_19     tinyint(1)           NO         0
        field_20     tinyint(1)           NO         0
        create_date  datetime             NO         2012-01-01 00:00:00
        last_update  datetime             NO         2012-01-01 00:00:00
        run_id       int(10) unsigned     NO         999

    I used a surrogate key because I had read that it was good practice. From a business perspective, I want to keep aware of potential fraudulent activity (say, for 200 days a user is associated with state X and then the next day they are associated with state Y -- they could have moved, or their account could have been compromised), so that is why geographic data is kept. The field id_B may have a few distinct values of id_A associated with it, but I am interested in knowing distinct (id_A, id_B) tuples. In the context of this information, my friend suggested that something like (id_A, id_B, zip_code) be the primary key.

    For the large majority of daily ETL processes (80%), I only expect the following fields to be updated for existing records: field_10 - field_14, last_update, and run_id (this field is a foreign key to my etlLog table and is used for ETL auditing purposes).


  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads.

    Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2?

    I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system.

    The reason I'm even putting the question up here is that it seems like, most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

