Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.

Page 243/976 | < Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >

  • Multi-level shop, xml or sql. best practice?

    - by danrichardson
    Hello, I have a general "best practice" question regarding building a multi-level shop, which I hope doesn't get marked down/deleted as I personally think it's quite a good "subjective" question. I am a developer in charge (for the most part) of maintaining and evolving a CMS system and associated front-end functionality. Over the past half year I have developed a multiple-level shop system so that an infinite level of categories may exist down to a product level, and it all works fine. However, over the last week or so I have questioned my own methods in front-end development and the best way to show the multi-level data structure. I currently use a SQL Server database (2000) and pull out all the shop levels and then process them into an enumerable typed list with child enumerable typed lists, so that all levels are sorted. This in my head seems quite process heavy, but we're not talking about thousands of rows, generally only 1-500 rows maybe. I have been toying with the idea recently of storing the structure in an XML document (as well as the database) and then sending last-modified headers when serving/requesting the document, which would then be processed as/when necessary with an XSL(T) document, processed server side. This is quite a handy, reusable method of storing the data, but does it have more overhead given that I'm opening and closing files? Also, the XML will require a bit of processing to pull out blocks of XML if, for instance, I wanted to show two levels midway through the tree for a side menu. I use the above method for sitemap purposes, so there is already code I have built which does what I require, but I'm unsure what the best approach is. Maybe a hybrid method which pulls out the data, sorts it and then makes an XML document/stream (XDocument/XmlDocument) for XSL processing is a good way? This is the way I currently make the CMS work for the shop. So really (and thanks for sticking with me on this), I am just wondering which methods other people use or recommend as being the best/most logical way of doing things. Thanks, Dan
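
    Below is a minimal C# sketch of the in-memory approach described above: turning the flat rows pulled from SQL Server into a sorted tree of typed lists. The Category, Id, ParentId and Name members are illustrative assumptions, not the poster's actual schema; for a few hundred rows this is a single cheap pass, which is one argument for keeping the sort in code rather than pushing it into XML/XSLT.

        // Hedged sketch: build the nested category structure from flat rows.
        // Category/Id/ParentId/Name are assumed names, not the real schema.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Category
        {
            public int Id { get; set; }
            public int? ParentId { get; set; }            // null for top-level categories
            public string Name { get; set; }
            public List<Category> Children { get; } = new List<Category>();
        }

        public static class CategoryTreeBuilder
        {
            // Turns a flat list (e.g. read from the shop table) into a sorted tree.
            public static List<Category> Build(IEnumerable<Category> flat)
            {
                var byId = flat.ToDictionary(c => c.Id);
                var roots = new List<Category>();

                foreach (var category in byId.Values)
                {
                    if (category.ParentId.HasValue && byId.TryGetValue(category.ParentId.Value, out var parent))
                        parent.Children.Add(category);
                    else
                        roots.Add(category);
                }

                SortLevel(roots);
                return roots;
            }

            private static void SortLevel(List<Category> level)
            {
                level.Sort((a, b) => string.Compare(a.Name, b.Name, StringComparison.Ordinal));
                foreach (var c in level) SortLevel(c.Children);
            }
        }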

    Read the article

  • ID3D10Device Memory Allocation Strategy and E_OUTOFMEMORY

    - by Buzz
    Hi guys, I want to know more detail about the memory allocation strategy in ID3D10Device. Could you give me some help? First question: I know D3D10 has done some work on memory virtualization, which means the client doesn't need to consider where the buffer is reserved (GPU memory, AGP memory or process system memory). Is this correct? Second question: when I use ID3D10Device to CreateBuffer continuously, no matter what the buffer desc type is, for example ID3D10Device::CreateBuffer( ... D3D10_USAGE_DEFAULT ... ); ID3D10Device::CreateBuffer( ... D3D10_USAGE_IMMUTABLE ... ); ID3D10Device::CreateBuffer( ... D3D10_USAGE_DYNAMIC ... ); ID3D10Device::CreateBuffer( ... D3D10_USAGE_STAGING ... ); etc., if CreateBuffer returns the error code "E_OUTOFMEMORY", does that mean the process's virtual memory is exhausted? And at that point, would memory allocation on the process default heap also fail? Thanks in advance!

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, We have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4 GB of physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2 GB. It can be up to about 1.9 GB. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size, or if the request allocates a smaller size than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory that has been freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds. i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request. Cheers, Brian.

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities: Obtain connection to recent schema version (called "source"). Obtain connection to previous schema version (called "target"). Compare all schema objects between source and target. Create a script to make the target schema equivalent to the source schema ("upgrade script"). Create a rollback script to revert the source schema, used if the upgrade script fails (at any point). Create individual files for schema objects. The software must: Use ALTER TABLE instead of DROP and CREATE for renamed columns. Work with Oracle 10g or greater. Create scripts that can be batch executed (via command-line). Trivial installation process. (Bonus) Create scripts that can be executed with SQL*Plus. Here are some examples (from StackOverflow, ServerFault, and Google searches): Change Manager Oracle SQL Developer Software that does not meet the criteria, or cannot be evaluated, includes: TOAD PL/SQL Developer - Invalid SQL*Plus statements. Does not produce ALTER statements. SQL Fairy - No installer. Complex installation process. Poorly documented. DBDiff - Crippled data set evaluation, poor customer support. OrbitDB - Crippled data set evaluation. SchemaCrawler - No easily identifiable download version for Oracle databases. SQL Compare - SQL Server, not Oracle. LiquiBase - Requires changing the development process. No installer. Manually edit config files. Does not recognize its own baseUrl parameter. The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.

    Read the article

  • Difference between SQL 2005 and SQL 2008 for inserting multiple rows with XML

    - by Sam Dahan
    I am using the following SQL code for inserting multiple rows of data in a table. The data is passed to the stored procedure using an XML variable : INSERT INTO MyTable SELECT SampleTime = T.Item.value('SampleTime[1]', 'datetime'), Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/MyRecord') T(item) I have a whole bunch of unit tests to verify that I am inserting the right information, the right number of records, etc.. when I call the stored procedure. All fine and dandy - that is, until we began to monkey around with the compatibility level of the database. The code above worked beautifully as long as we kept the compatibility level of the DB at 90 (SQL 2005). When we set the compatibility level at 100 (SQL 2008), the unit tests failed, because the stored procedure using the code above times out. The unit tests are dropping the database, re-creating it from scripts, and running the tests on the brand new DB, so it's not - I think - a question of the 'old compatibility level' sticking around. Using the SQL Management studio, I made up a quick test SQL script. Using the same XML chunk, I alter the DB compat level , truncate the table, then use the code above to insert 650 rows. When the level is 90 (SQL 2005), it runs in milliseconds. When the level is 100 (SQL 2008) it sometimes takes over a minute, sometimes runs in milliseconds. I'd appreciate any insight anyone might have into that. EDIT The script takes over a minute to run with my actual data, which has more rows than I show here, is a real table, and has an index. With the following example code, the difference goes between milliseconds and around 5 seconds. --use [master] --ALTER DATABASE MyDB SET compatibility_level =100 use [MyDB] declare @xml xml set @xml = '<?xml version="1.0"?> <Root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Record> <SampleTime>2009-01-24T00:00:00</SampleTime> <Volume1>0</Volume1> <Volume2>0</Volume2> </Record> ..... 653 records, sample time spaced out 4 hours ........ </Root>' DECLARE @myTable TABLE( ID int IDENTITY(1,1) NOT NULL, [SampleTime] [datetime] NOT NULL, [Volume1] [float] NULL, [Volume2] [float] NULL) INSERT INTO @myTable select T.Item.value('SampleTime[1]', 'datetime') as SampleTime, Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/Record') T(item) I uncomment the 2 lines at the top, select them and run just that (the ALTER DATABASE statement), then comment the 2 lines, deselect any text and run the whole thing. When I change from 90 to 100, it runs all the time in 5 seconds (I change the level once, but I run the series several times to see if I have consistent results). When I change from 100 to 90, it runs in milliseconds all the time. Just so you can play with it too. I am using SQL Server 2008 R2 standard edition.

    Read the article

  • How to handle refunds or rebates via a payment processor?

    - by Tai Squared
    I need to handle online payments and am trying to choose a payment processor. One requirement is to handle refunds and rebates to the customer. These won't always happen at the time of sale, and won't necessarily be for the entire amount of the purchase. Is this something all payment processors handle? I don't want to have to do this manually, as there may be many rebates and they may be for relatively small amounts. I see PayPal has a refund API, but other parts of their site talk about sending a refund within 60 days. Is this something also required by the API? Amazon FPS also has a refund API that seems a bit more flexible. The Google Checkout refund has an amount field, but it's unclear to me whether you can do a partial refund, as the description reads "The refund-order command instructs Google Checkout to refund the buyer for a particular order." What are some things to look out for when looking for a payment processor that can handle rebates and refunds? Is there always a time limit on issuing these refunds? Is using a merchant account better for this type of process? I was hoping to avoid that due to the increased cost and complexity, but would consider it if it meets all of my requirements. Update It appears the refund process is fairly simple and handled by all processors. Is there any additional information on rebates? I would like to avoid sending live checks to customers, but I will have to send rebates in some small amounts that may come a few months after the initial purchase.

    Read the article

  • playing a dvd using C#

    - by user203212
    Hi, I have a DVD copied to my hard drive. It has a folder called VIDEO_TS. I am planning to run VLC player to play it. I was wondering how I can play this DVD using C#. I do not want to use an ActiveX control inside C#. All I need to do is run vlc.exe using Process, and I have already done that. But how do I select, from code, the specific file that will start playing in the VLC player? My code is: System.Diagnostics.Process Proc = new System.Diagnostics.Process(); Proc.StartInfo.FileName = @"C:\Program Files\VideoLAN\VLC\vlc.exe"; Proc.StartInfo.Arguments = @"C:\Test\Legacy\VIDEO_TS\VIDEO_TS.BUP"; Proc.Start(); I am trying to send the file name as an argument to run it in vlc.exe, but it's not working. It's just opening up the VLC player. I don't want the user to select the file manually.
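
    A hedged C# sketch of the launch call follows. VLC generally wants the DVD's VIDEO_TS folder (or a dvd:/// MRL) rather than the .BUP backup index, and arguments containing spaces need quoting; the exact MRL form is an assumption to verify against the installed VLC version.

        // Hedged sketch: point VLC at the title set instead of the .BUP index file.
        // The dvd:/// MRL form is an assumption; check it against your VLC version.
        using System.Diagnostics;

        class DvdLauncher
        {
            static void Main()
            {
                var proc = new Process();
                proc.StartInfo.FileName = @"C:\Program Files\VideoLAN\VLC\vlc.exe";

                // Quote the argument so paths with spaces survive the command line.
                proc.StartInfo.Arguments = "\"C:\\Test\\Legacy\\VIDEO_TS\"";
                // Alternative (assumed MRL syntax):
                // proc.StartInfo.Arguments = "dvd:///C:/Test/Legacy/VIDEO_TS/";

                proc.StartInfo.UseShellExecute = false;
                proc.Start();
            }
        }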

    Read the article

  • How to cache pages using background jobs ?

    - by Alexandre
    Definitions: resource = collection of database records, regeneration = processing these records and outputting the corresponding HTML. Current flow: Receive client request. Check for resource in cache. If not in cache or cache expired, regenerate. Return result. The problem is that the regeneration step can tie up a single server process for 10-15 seconds. If a couple of users request the same resource, that could result in a couple of processes regenerating the exact same resource simultaneously, each taking up 10-15 seconds. Wouldn't it be preferable to have the frontend signal some background process saying "Hey, regenerate this resource for me"? But then what would it display to the user? "Rebuilding" is not acceptable. All resources would have to be in cache ahead of time. This could be a problem, as the database would almost be duplicated on the filesystem (too big to fit in memory). Is there a way to avoid this? Not ideal, but it seems like the only way out. But then there's one more problem: how do I keep two processes from requesting regeneration of the same resource at the same time? The background process could be regenerating the resource when a frontend asks for regeneration of that same resource. I'm using PHP and the Zend Framework, just in case someone wants to offer a platform-specific solution. Not that it matters though - I think this problem applies to any language/framework. Thanks!
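
    Since the poster notes the problem is language-agnostic, here is a hedged sketch (in C# rather than PHP/Zend) of one common pattern: always serve the copy already on disk, and let a single worker rebuild an expired resource under a cross-process lock so concurrent requests don't all regenerate it. The cache path helper and the regenerate delegate are hypothetical placeholders; the placeholder text only ever appears if a resource was never cached, which matches the "everything must be cached ahead of time" conclusion above.

        // Hedged sketch of "serve stale, regenerate once". A named mutex gives a
        // cross-process lock so only one worker rebuilds a resource; every request,
        // meanwhile, is served whatever copy already exists on disk.
        // CachePathFor and the regenerate delegate are hypothetical placeholders.
        using System;
        using System.IO;
        using System.Threading;
        using System.Threading.Tasks;

        static class StaleWhileRegenerate
        {
            public static string Serve(string resourceId, TimeSpan ttl, Func<string> regenerate)
            {
                string cachePath = CachePathFor(resourceId);
                bool expired = !File.Exists(cachePath) ||
                               DateTime.UtcNow - File.GetLastWriteTimeUtc(cachePath) > ttl;

                if (expired)
                {
                    // Fire-and-forget rebuild; the current request does not wait 10-15 s.
                    Task.Run(() =>
                    {
                        using var mutex = new Mutex(false, "regen-" + resourceId);
                        if (!mutex.WaitOne(TimeSpan.Zero))
                            return;                 // another process is already rebuilding
                        try { File.WriteAllText(cachePath, regenerate()); }
                        finally { mutex.ReleaseMutex(); }
                    });
                }

                // Serve the (possibly stale) copy; placeholder only if nothing was ever cached.
                return File.Exists(cachePath) ? File.ReadAllText(cachePath)
                                              : "Rebuilding, please retry shortly.";
            }

            static string CachePathFor(string id) =>
                Path.Combine(Path.GetTempPath(), "cache-" + id + ".html");
        }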

    Read the article

  • Cookie add in the Global.asax warning in application log

    - by Ioxp
    In my Global.ASAX file I have the following: System.Web.HttpCookie isAccess = new System.Web.HttpCookie("IsAccess"); isAccess.Expires = DateTime.Now.AddDays(-1); isAccess.Value = ""; System.Web.HttpContext.Current.Response.Cookies.Add(isAccess); Every time this method runs, the following is logged in the application event log as a warning: Event code: 3005 Event message: An unhandled exception has occurred. Event time: 5/25/2010 12:23:20 PM Event time (UTC): 5/25/2010 4:23:20 PM Event ID: c515e27a28474eab8d99720c3f5a8e90 Event sequence: 4148 Event occurrence: 332 Event detail code: 0 Application information: Application domain: /LM/W3SVC/2100509645/Root-1-129192259222289896 Trust level: Full Application Virtual Path: / Application Path: <PathRemoved>\www\ Machine name: TIPPER Process information: Process ID: 6936 Process name: w3wp.exe Account name: NT AUTHORITY\NETWORK SERVICE Exception information: Exception type: NullReferenceException Exception message: Object reference not set to an instance of an object. Request information: Request URL: Request path: User host address: User: Is authenticated: False Authentication Type: Thread account name: NT AUTHORITY\NETWORK SERVICE Thread information: Thread ID: 7 Thread account name: NT AUTHORITY\NETWORK SERVICE Is impersonating: False Stack trace: at ASP.global_asax.Session_End(Object sender, EventArgs e) in <PathRemoved>\Global.asax:line 113 Any idea why this code would cause this error?
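
    The stack trace points at Session_End, which in ASP.NET fires on a background timer thread with no HTTP request attached, so HttpContext.Current (and therefore Response) is null there; that matches the NullReferenceException. A hedged sketch of a guarded handler follows; expiring the cookie is usually better done in a request-bound event such as an explicit logout handler, where a Response actually exists.

        // Hedged sketch of the Global.asax handler: guard against the missing context.
        // Session_End has no request, so there is normally nothing to write cookies to.
        protected void Session_End(object sender, EventArgs e)
        {
            var context = System.Web.HttpContext.Current;
            if (context == null || context.Response == null)
                return; // session timed out in the background; no response to write to

            var isAccess = new System.Web.HttpCookie("IsAccess")
            {
                Expires = DateTime.Now.AddDays(-1),
                Value = ""
            };
            context.Response.Cookies.Add(isAccess);
        }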

    Read the article

  • Javascript: Error 'Object Required.'

    - by javascripthelp
    The following is the error popup message I get when I click the "Finalize" button on my website: "Line: 298 Char: 5 Error: Object required: 'lobi_c_selected(...)' Code: 0 URL: http://10.128.23.50/i-prostage/AP/w_ap_check_reconciliation.asp?" Normally, when I click the "Finalize" button, it generates and shows a report in a popup window. However, I get this error message instead. Can any of you help me locate the error in the following source code for the page, which I'm running in IE 6? sub cb_finalize dim ll_loop, ll_found, lobj_c_selected of_SetHourGlass(True) rpt_link.innerHTML = "" rpt_link.href = "" 'Process only if at least one record was selected if rds1.Recordset.Recordcount > 0 then lb_found = false if rds1.Recordset.Recordcount = 1 then if c_selected.checked then lb_found = true else Set lobj_c_selected = document.all.item("c_selected") for ll_loop = 1 to rds1.Recordset.Recordcount if lobj_c_selected(ll_loop - 1).checked then lb_found = true exit for end if next end if if not lb_found then msgbox "Please select a record to be posted.", vbInformation, "ProStage Accounting" of_SetHourGlass(False) else window.setTimeout "ue_process()", 100, vbscript 'Post Event end if else msgbox "There's no record to be posted." + vbcrlf + "Please select a record to be posted.", vbInformation, "ProStage Accounting" of_SetHourGlass(False) end if end sub Sub ue_process dim ll_loop, ll_count, ls_ret, ls_trxid, ls_r1 'Get only selected records redim ls_trxid(rds1.Recordset.Recordcount) for ll_loop = 1 to rds1.Recordset.Recordcount rds1.Recordset.AbsolutePosition = ll_loop if not isnull(rds1.Recordset("clrdt")) then 'Add to TRXID array if selected ll_count = ll_count + 1 ls_trxid(ll_count) = rds1.Recordset("trxid") end if next 'Process reconciliation rds1.Recordset.MarshalOptions = 1 ls_ret = iBO_Update.of_update_1(is_dbsrc, rds1.Recordset, "GLTRX", is_sql) if ls_ret <> "1" then msgbox "Update Failed ! " + ls_ret, vbExclamation + vbOKonly, document.title of_SetHourGlass(False) else 'Display Posting Journal & clear screen ue_posting_journal ls_trxid, ll_count Set rds1.SourceRecordset = iBO_Company.of_validate(is_dbsrc, "SELECT 1 FROM DUAL WHERE 1 = 2") ib_query = false 'Not to process RetrieveEnd end if End Sub Sub ue_posting_journal(as_trxid, al_count) dim ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq of_setreport() ' Start service 'Prepare arguments for report in RPTMSTR table for ll_loop = 1 to al_count + 1 select case ll_loop case 1 'Range displayed as report title ll_argseq = 800 ls_argtyp = null ls_argmnt = "st_title.text = 'Bank: " + bnkid_name.value + ", As of Date: " + _ of_date_stringtodate(id_trxdt) + "'" ll_sargseq = 0 case else 'TRXID array ll_argseq = 1 ll_sargseq = ll_loop - 1 ls_argtyp = "S" ls_argmnt = as_trxid(ll_loop - 1) end select of_report_register_array "d_rpt_ap_check_reconciliation_register", ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq next of_report_process "d_rpt_ap_check_reconciliation_register", true, true 'Display report of_sethourglass(False) End Sub

    Read the article

  • Writing/Reading struct w/ dynamic array through pipe in C

    - by anrui
    I have a struct with a dynamic array inside of it: struct mystruct{ int count; int *arr; }mystruct_t; and I want to pass this struct down a pipe in C and around a ring of processes. When I alter the value of count in each process, it is changed correctly. My problem is with the dynamic array. I am allocating the array as such: mystruct_t x; x.arr = malloc( howManyItemsDoINeedToStore * sizeof( int ) ); Each process should read from the pipe, do something to that array, and then write it to another pipe. The ring is set up correctly; there's no problem there. My problem is that all of the processes, except the first one, are not getting a correct copy of the array. I initialize all of the values to, say, 10 in the first process; however, they all show up as 0 in the subsequent ones. for( j = 0; j < howManyItemsDoINeedToStore; j++ ){ x.arr[j] = 10; } Initially: 10 10 10 10 10 After Proc 1: 9 10 10 10 15 After Proc 2: 0 0 0 0 0 After Proc 3: 0 0 0 0 0 After Proc 4: 0 0 0 0 0 After Proc 5: 0 0 0 0 0 After Proc 1: 9 10 10 10 15 After Proc 2: 0 0 0 0 0 After Proc 3: 0 0 0 0 0 After Proc 4: 0 0 0 0 0 After Proc 5: 0 0 0 0 0 Now, if I alter my code to, say, struct mystruct{ int count; int arr[10]; }mystruct_t; everything is passed correctly down the pipe, no problem. I am using READ and WRITE, in C: write( STDOUT_FILENO, &x, sizeof( mystruct_t ) ); read( STDIN_FILENO, &x, sizeof( mystruct_t ) ); Any help would be appreciated. Thanks in advance!

    Read the article

  • Naming multi-instance performance counters in .NET

    - by Roger Lipscombe
    Most multiple instance performance counters in Windows seem to automatically(?) have a #n on the end if there's more than one instance with the same name. For example: if, in Perfmon, you look under the Process category, you'll see: ... dwm explorer explorer#1 ... I have two explorer.exe processes, so the second counter has #1 appended to its name. When I attempt to do this in a .NET application: I can create the category, and register the instance (using the PerformanceCounterCategory.Create that takes a CounterCreationDataCollection). I can open the counter for write and write to it. When I open the counter a second time, it opens the same counter. This means that I have two applications fighting over the counters. The documentation for PerformanceCounter.InstanceName states that # is not allowed in the name. So: how do I have multiple-instance performance counters that are actually multiple instance? And where the second (and subsequent) instances get #n appended to the name? That is: I know that I can put the process ID (e.g.) on the instance name. This works, but has the unfortunate side effect that restarting the process results in a new PID, and Perfmon continues monitoring the old counter.
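
    For what it's worth, the #n suffixes under the Process category appear to be produced by that OS counter provider; custom multi-instance categories created from System.Diagnostics don't get them automatically (and '#' is rejected in instance names, as noted above). A common workaround, sketched below under that assumption, is to keep a per-process uniquifier in the instance name but set InstanceLifetime to Process, so the instance is removed when the process exits and Perfmon is not left watching a stale counter after a restart. The category and counter names here are hypothetical.

        // Hedged sketch: multi-instance custom category with per-process instances
        // that disappear on process exit. "MyApp" / "Requests" are made-up names.
        using System;
        using System.Diagnostics;

        static class AppCounters
        {
            public static PerformanceCounter Open()
            {
                const string category = "MyApp";
                const string counterName = "Requests";

                if (!PerformanceCounterCategory.Exists(category))
                {
                    var counters = new CounterCreationDataCollection
                    {
                        new CounterCreationData(counterName, "Requests handled",
                                                PerformanceCounterType.NumberOfItems64)
                    };
                    PerformanceCounterCategory.Create(category, "MyApp counters",
                        PerformanceCounterCategoryType.MultiInstance, counters);
                }

                // Unique per process; InstanceLifetime.Process removes the instance
                // when this process exits, so a restart does not leave a ghost in Perfmon.
                string instance = AppDomain.CurrentDomain.FriendlyName + "_" +
                                  Process.GetCurrentProcess().Id;

                var counter = new PerformanceCounter();
                counter.CategoryName = category;
                counter.CounterName = counterName;
                counter.InstanceName = instance;
                counter.InstanceLifetime = PerformanceCounterInstanceLifetime.Process;
                counter.ReadOnly = false;
                counter.RawValue = 0;
                return counter;
            }
        }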

    Read the article

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8-CPU server with MySQL, and I start 7 delayed_job processes: RAILS_ENV=production script/delayed_job -n 7 start Q1: I'm wondering whether it is possible that 2 or more delayed_job processes start processing the same job (the same record-row in the delayed_jobs database table). I checked the code of the delayed_job plugin but cannot find locking done the way I would expect. I think each process should lock the database table before executing an UPDATE on the locked_by column. They lock the record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by...). Is that really enough? No locking needed? Why? I know that UPDATE has higher priority than SELECT, but I think this does not have the effect in this case. My understanding of the multi-threaded situation is: Process1: Get waiting job X. [OK] Process2: Get waiting job X. [OK] Process1: Update locked_by field. [OK] Process2: Update locked_by field. [OK] Process1: Get waiting job X. [Already processed] Process2: Get waiting job X. [Already processed] I think in some cases more workers can get the same information and can start processing the same job. Q2: Is 7 delayed_job processes a good number for an 8-CPU server? Why or why not? Thx 10x!

    Read the article

  • How to make a thread that runs at x:00 x:15 x:30 and x:45 do something different at 2:00.

    - by rmarimon
    I have a timer thread that needs to run at particular moments of the day to do an incremental replication with a database. Right now it runs at the hour, 15 minutes past the hour, 30 minutes past the hour and 45 minutes past the hour. This is the code I have, which is working OK: public class TimerRunner implements Runnable { private static final Semaphore lock = new Semaphore(1); private static final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor(); public static void initialize() { long delay = getDelay(); executor.schedule(new TimerRunner(), delay, TimeUnit.SECONDS); } public static void destroy() { executor.shutdownNow(); } private static long getDelay() { Calendar now = Calendar.getInstance(); long p = 15 * 60; // run at 00, 15, 30 and 45 minutes past the hour long second = now.get(Calendar.MINUTE) * 60 + now.get(Calendar.SECOND); return p - (second % p); } public static void replicate() { if (lock.tryAcquire()) { try { Thread t = new Thread(new Runnable() { public void run() { try { // here is where the magic happens } finally { lock.release(); } } }); t.start(); } catch (Exception e) { lock.release(); } } else { throw new IllegalStateException("already running a replicator"); } } public void run() { try { TimerRunner.replicate(); } finally { long delay = getDelay(); executor.schedule(new TimerRunner(), delay, TimeUnit.SECONDS); } } } This process is started by calling TimerRunner.initialize() when a server starts and stopped by calling TimerRunner.destroy(). I have created a full replication process (as opposed to incremental) that I would like to run at a certain moment of the day, say 2:00am. How would I change the above code to do this? I think it should be very simple, something like "if it is now around 2:00am and it's been a long time since I did the full replication, then do it now", but I can't get the if right. Beware that sometimes the replication process takes way longer to complete, sometimes beyond the 15 minutes, which poses a problem for running at around 2:00am.
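
    A hedged sketch of just the missing check follows (in C# for consistency with the other sketches in these notes; the poster's code is Java, but the logic is the same). The idea: remember when the last full replication finished, and on each tick treat a full run as due once the clock is past today's 2:00 and the last full run is more than, say, 20 hours old. Because the check runs on every quarter-hour tick, a full pass that overruns 2:00 is simply picked up on the next tick instead of being skipped for a whole day. The 20-hour threshold and method names are assumptions.

        // Hedged sketch of the "is a full replication due?" decision.
        using System;

        class ReplicationScheduler
        {
            private DateTime lastFullReplication = DateTime.MinValue;

            public void RunScheduledWork()
            {
                DateTime now = DateTime.Now;
                DateTime todayAtTwo = now.Date.AddHours(2);

                // Due once we are past 2:00 and roughly a day has gone by since the last full run.
                bool fullIsDue = now >= todayAtTwo &&
                                 now - lastFullReplication > TimeSpan.FromHours(20);

                if (fullIsDue)
                {
                    FullReplication();
                    lastFullReplication = now;
                }
                else
                {
                    IncrementalReplication();
                }
            }

            private void FullReplication() { /* long-running full sync */ }
            private void IncrementalReplication() { /* the existing 15-minute sync */ }
        }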

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background My group has 4 SQL Server Databases: Production UAT Test Dev I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs) I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin who promotes to UAT. After successful user testing, the same Admin promotes to Production. The Problem The entire process is awkward for a few reasons. Each person must manually track their changes. If I update, add, remove any objects I need to track them so that my promotion request contains everything I've done. In theory, if I miss something testing or UAT should catch it, but this isn't certain and it's a waste of the tester's time, anyway. Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc. The Question People have been doing this kind of work for decades, so I imagine there have got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structure was different, use that diff to generate a change script, use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.

    Read the article

  • How do you fix issues with the debugger for the Android plug-in for Eclipse not attaching?

    - by user279112
    I have been trying to program something for the Android mobile phone, using Eclipse and the Android plug-in for that IDE, and my debugger used to attach just fine. But it has suddenly started having consistent issues attaching. I just get the message about how the process is waiting for the debugger to attach, and then it just won't. Whether the attachment glitches seems to have something to do with the code I'm trying to debug, as it is drastically more of an issue with some versions of my code than with others (on the same app). How do I fix this? Now, before you answer, please understand that I have researched this issue already. I have found a couple of solutions that have worked for other people, but which do not work for me. One is setting the debuggable property in the main manifest file to true, and the other is going into Dev Tools and into some settings menu, and from there selecting the process and essentially saying to the fake phone, "Debug this process". Neither has really worked. Any other ideas? And just in case... I've run into one blasted technical issue like this after another trying to program for that stupid phone. And I'm not the only one who's having these issues; when I go online to research them, it is always very easy to find many people who have the same issues, and who are having to use the shoddiest, sloppiest, most "ghetto" solutions to work around them. I know that many people have created good applications for that phone, but I don't see how I'm supposed to do that when the SDK and the plug-in just don't work half the time. Does anybody know how I may put all this trash behind me, once and for all? Thanks for your answers to either question!

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11 which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering using a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc. but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong): With page caching the Rails process is hit initially and then generates a static HTML file that is served directly by the Web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired then Rails is hit again and the static file is regenerated with the updated content ready for the next request With an HTTP reverse proxy cache the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale then Rails serves the updated content to the proxy which caches it and then serves it to the browser If my understanding is correct, then doesn't page caching result in less hits to the Rails process? There isn't all that back and forth to determine if the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?

    Read the article

  • How to hand-over a TCP listening socket with minimal downtime?

    - by Shtééf
    While this question is tagged EventMachine, generic BSD-socket solutions in any language are much appreciated too. Some background: I have an application listening on a TCP socket. It is started and shut down with a regular System V style init script. My problem is that it needs some time to start up before it is ready to service the TCP socket. It's not too long, perhaps only 5 seconds, but that's 5 seconds too long when a restart needs to be performed during a workday. It's also crucial that existing connections remain open and are finished normally. Reasons for a restart of the application are patches, upgrades, and the like. I unfortunately find myself in the position that, every once in a while, I need to do this kind of thing in production. The question: I'm looking for a way to do a neat hand-over of the TCP listening socket, from one process to another, and as a result get only a split second of downtime. I'd like existing connections / sockets to remain open and finish processing in the old process, while the new process starts servicing new connections. Is there some proven method of doing this using BSD sockets? (Bonus points for an EventMachine solution.) Are there perhaps open-source libraries out there implementing this, that I can use as-is or as a reference? (Again, non-Ruby and non-EventMachine solutions are appreciated too!)

    Read the article

  • How can I work out what events are being waited for with WinDBG in a kernel debug session

    - by Benj
    I'm a complete WinDbg newbie and I've been trying to debug a WindowsXP problem that a customer has sent me where our software and some third party software prevent windows from logging off. I've reproduced the problem and have verified that only when our software and the customers software are both installed (although not necessarily running at logoff) does the log off problem occur. I've observed that WM_ENDSESSION messages are not reaching the running windows when the user tries to log off and I know that the third party software uses a kernel driver. I've been looking at the processes in WinDbg and I know that csrss.exe would normally send all the windows a WM_ENDSESSION message. When I ran: !process 82356020 6 To look at csrss.exe's stack I can see: WARNING: Frame IP not in any known module. Following frames may be wrong. 00000000 00000000 00000000 00000000 00000000 0x7c90e514 THREAD 8246d998 Cid 0248.02a0 Teb: 7ffd7000 Win32Thread: e1627008 WAIT: (WrUserRequest) UserMode Non-Alertable 8243d9f0 SynchronizationEvent 81fe0390 SynchronizationEvent Not impersonating DeviceMap e1004450 Owning Process 82356020 Image: csrss.exe Attached Process N/A Image: N/A Wait Start TickCount 1813 Ticks: 20748 (0:00:05:24.187) Context Switch Count 3 LargeStack UserTime 00:00:00.000 KernelTime 00:00:00.000 Start Address 0x75b67cdf Stack Init f80bd000 Current f80bc9c8 Base f80bd000 Limit f80ba000 Call 0 Priority 14 BasePriority 13 PriorityDecrement 0 DecrementCount 0 Kernel stack not resident. ChildEBP RetAddr Args to Child f80bc9e0 80500ce6 00000000 8246d998 804f9af2 nt!KiSwapContext+0x2e (FPO: [Uses EBP] [0,0,4]) f80bc9ec 804f9af2 804f986e e1627008 00000000 nt!KiSwapThread+0x46 (FPO: [0,0,0]) f80bca24 bf80a4a3 00000002 82475218 00000001 nt!KeWaitForMultipleObjects+0x284 (FPO: [Non-Fpo]) f80bca5c bf88c0a6 00000001 82475218 00000000 win32k!xxxMsgWaitForMultipleObjects+0xb0 (FPO: [Non-Fpo]) f80bcd30 bf87507d bf9ac0a0 00000001 f80bcd54 win32k!xxxDesktopThread+0x339 (FPO: [Non-Fpo]) f80bcd40 bf8010fd bf9ac0a0 f80bcd64 00bcfff4 win32k!xxxCreateSystemThreads+0x6a (FPO: [Non-Fpo]) f80bcd54 8053d648 00000000 00000022 00000000 win32k!NtUserCallOneParam+0x23 (FPO: [Non-Fpo]) f80bcd54 7c90e514 00000000 00000022 00000000 nt!KiFastCallEntry+0xf8 (FPO: [0,0] TrapFrame @ f80bcd64) This waitForMultipleObjects looks interesting because I'm wondering if csrss.exe is waiting on some event which isn't arriving to allow the logoff. Can anyone tell me how I might find out what event it's waiting for anything else I might do to further investigate the problem?

    Read the article

  • Powershell 2.0 Hang When Run From MsDeploy pre- post- ops using c/

    - by SonOfNun
    I am trying to invoke powershell during the preSync call in a MSDeploy command, but powershell does not exit the process after it has been called. The command (from command line): "tools/MSDeploy/msdeploy.exe" -verb:sync -preSync:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/MyInstallPath/deploy.ps1 Set-WebAppOffline Uninstall-Service ",waitInterval=60000 -usechecksum -source:dirPath="build/for-deployment" -dest:wmsvc=BLUEPRINT-X86,username=deployer,password=deployer,dirPath=C:/MyInstallPath I used a hack here (http://therightstuff.de/2010/02/06/How-We-Practice-Continuous-Integration-And-Deployment-With-MSDeploy.aspx) that gets the powershell process and kills it but that didn't work. I also tried taskkill and the sysinternals equivalent, but nothing will kill the process so that MSDeploy errors out. The command is executed, but then just sits there. Any ideas what might be causing powershell to hang like this? I have found a few other similar issues around the web but no answers. Environment is Win 2K3, using Powershell 2.0. UPDATE: Here is a .vbs script I use to invoke my powershell command now. Invoke using 'cscript.exe path/to/script.vbs': Option Explicit Dim oShell, appCmd,oShellExec Set oShell = CreateObject("WScript.Shell") appCmd = "powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command ""&{ . c:/development/materialstesting/deploy/web/deploy.ps1; Set-WebAppOffline }"" " Set oShellExec = oShell.Exec(appCmd) oShellExec.StdIn.Close()

    Read the article

  • Linux time sample based profiler.

    - by Caspin
    short version: Is there a good time based sampling profiler for Linux? long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a c++ name. I only stumbled upon the code by accident while chasing down another bottleneck. The OProfile didn't show anything unusual about the code so I almost ignored it but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second. About a x60 speed up. Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now OProfile thinks the bottle neck was the heap and std::string calls (which BTW once optimized dropped the runtime to less than a second, more than x2 speed up). Here is my OProfile configuration: $ sudo opcontrol --status Daemon not running Event 0: CPU_CLK_UNHALTED:90000:0:1:1 Separate options: library vmlinux file: none Image filter: /path/to/excutable Call-graph depth: 7 Buffer size: 65536 Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only logs its samples to the currently running process. I'd like it to always log its samples to the process I'm profiling. So if the process is currently switched out (blocking on IO or a popen call) OProfile would just place its sample at the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU. It can't help with executables that that have inefficient blocking calls.

    Read the article

  • Help a Beginner with a PHP based Login System

    - by Brian Lang
    I'm a bit embarrassed to say, but I've run into an issue with creating a PHP based login system. I'm using a site template to handle the look of the login process, so I will spare you the code. Here is my thought process on how to handle the login: Create a simple login.php file. On there will be a form whose action is set to itself. It will check to see if the submit has been clicked, and if so, validate to make sure the user entered a valid password / username. If they do, set a session variable to save some login info (username, NOT password), and redirect them to a restricted area. If the login info isn't valid, save an error message in a session variable, display an error message giving further instruction, and wait for the user to resubmit. Here is a chunk of what I have - hopefully one of you experts can see where I've gone wrong, and give me some insight: if(isset($_POST['submit'])) { if(!empty($_POST['username']) AND !empty(!$_POST['password'])) { header("Location: http://www.google.com"); } else { $err = 'All the fields must be filled in!'; } } if($err) { $_SESSION['msg']['login-err'] = $err; } ?> Now the above is just an example - the intent of the above code is to process user input, with the script validating simply that the user has given input for username and password. If they have, I would like them, in this case, to be redirected to google.com (for the sake of this example). If not, save an error message. Given my current code, the error message will display perfectly; however, if the user submits and has something entered for the username and password, the page simply doesn't redirect. I'm sure this is a silly question, but I am a beginner, and well, to be honest, a bit buzzed right now. Thanks so much!

    Read the article

  • Automated Legal Processing

    - by Chris S
    Will it ever be possible to make legal systems quantifiable enough to process with computer algorithms? What technologies would have to be in place before this is possible? Are there any existing technologies that are already trying to accomplish this? Out of curiosity, I downloaded the text for laws in my local municipality, and tried applying some simple NLP tricks to extract rules from sentences. I had mixed results. Some sentences were very explicit (e.g. "Cars may not be left in the park overnight"), but other sentences seemed hopelessly vague (e.g. "The council's purpose is to ensure the well-being of the community"). I apologize if this is too open-ended a topic, but I've often wondered what society would look like if legal systems were based on less ambiguous language. Lawyers, and the legal process in general, are so expensive because they have to manually process a complex set of rules codified in ambiguous legal texts. If this system could be represented in software, this huge expense could potentially be eliminated, making the legal system more accessible for everyone.

    Read the article

  • WCF Web Service - Service Unavailable

    - by born to hula
    I have a WCF Web Service which runs under an Application Pool on IIS. Lately I've been getting "Service Unavailable" when I try to make calls to this Web Service. The first thing I tried was restarting the Application Pool. I did, and after a couple of seconds it crashed and stopped. Looking at the Event Viewer, I found these messages, which so far haven't helped me find where the problem is: A process serving application pool 'X' reported a failure. The process id was '11616'. The data field contains the error number. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. After getting a couple of these, I got this one: Application pool 'X' is being automatically disabled due to a series of failures in the process(es) serving that application pool. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. I've already checked permissions and Application Pool configurations, but everything seems to be OK. Has anyone been through this? Thanks in advance.

    Read the article

  • Game architecture: modeling different steps/types of UI

    - by Sander
    I have not done any large game development projects, only messed around with little toy projects. However, I never found an intuitive answer to a specific design question. Namely, how are different types/states of UI modeled in games? E.g. how is a menu represented? How is it different from a "game world" state (let's use an FPS as an example). How is an overlaid menu on top of a "game world" modeled? Let's imagine the main loop of a game. Where do the game states come into play? It it a simple case-by-case approach? if (menu.IsEnabled) menu.Process(elapsedTime); if (world.IsEnabled) world.Process(elapsedTime); if (menu.IsVisible) menu.Draw(); if (world.IsVisible) world.Draw(); Or are menu and world represented somewhere in a different logic layer and not represented at this level? (E.g. a menu is just another high-level entity like e.g. player input or enemy manager, equal to all others) foreach (var entity in game.HighLevelEntities) entity.Process(elapsedTime); foreach (var entity in game.HighLevelEntities) entity.Draw(elapsedTime); Are there well-known design patterns for this? Come to think of it, I don't know any game-specific design patterns - I assume there are others, as well? Please tell me about them.
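
    One common answer to this question is a screen/state stack (often called a screen manager): the menu, the game world and any overlaid menu are all "screens" with their own Update/Draw, and the main loop only talks to the stack, drawing bottom-up and updating top-down until a blocking overlay is reached. The sketch below is a minimal illustration of that pattern, not the only way to structure it; the interface and class names are made up.

        // Hedged sketch of a screen/state stack: menus, the world and overlays are all
        // screens; the main loop delegates to the stack. Names are illustrative only.
        using System;
        using System.Collections.Generic;

        interface IScreen
        {
            bool BlocksUpdate { get; }            // e.g. a pause menu freezes what's below it
            void Update(TimeSpan elapsed);
            void Draw();
        }

        class ScreenStack
        {
            private readonly List<IScreen> screens = new List<IScreen>();

            public void Push(IScreen screen) => screens.Add(screen);
            public void Pop() => screens.RemoveAt(screens.Count - 1);

            public void Update(TimeSpan elapsed)
            {
                // Update from the top down, stopping below the first blocking screen.
                for (int i = screens.Count - 1; i >= 0; i--)
                {
                    screens[i].Update(elapsed);
                    if (screens[i].BlocksUpdate) break;
                }
            }

            public void Draw()
            {
                // Draw bottom-up so overlays (menu over world) appear on top.
                foreach (var screen in screens) screen.Draw();
            }
        }

        // Usage idea: push a MenuScreen at startup, swap to a WorldScreen on "New Game",
        // push a PauseMenuScreen over the world when the player hits Escape.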

    Read the article
