Search Results

Search found 3828 results on 154 pages for 'david steven'.

Page 109 of 154

  • Little Employee/Shift timetable HELP!!!

    - by DAVID
    Morning, guys. I have the following tables: operator(ope_id, ope_name), ope_shift(ope_id, shift_id, shift_date) and shift(shift_id, shift_start, shift_end). Here is a better view of the data: http://latinunit.net/emp_shift.txt and here is a screenshot of a select statement against the tables: http://img256.imageshack.us/img256/4013/opeshift.jpg. To view the current total shifts per operator I am using this query, and it works:

      SELECT OPE_ID, COUNT(OPE_ID) AS Total_shifts
      FROM operator_shift
      GROUP BY ope_id;

    But if there were 500 more rows it would count them all as well. The question is: does anyone have a better way of structuring the database, or how can I tell the system that a given set of rows belongs to a single month? A friend mentioned counting and then dividing by 30, but I'm not sure that works: what if the month isn't finished and you want to show the employee with the most shifts to date?
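
    One hedged way to keep the count meaningful as rows accumulate is a month-to-date filter rather than a different schema. The sketch below assumes the operator_shift table from the query above has a shift_date column and uses MySQL-style date functions; adjust both for the actual schema and database in use.

      -- Count shifts per operator for the current month only (month to date),
      -- highest first. shift_date and the MySQL date functions are assumptions.
      SELECT ope_id,
             COUNT(*) AS total_shifts
      FROM operator_shift
      WHERE shift_date >= DATE_FORMAT(CURDATE(), '%Y-%m-01')
        AND shift_date <  DATE_FORMAT(CURDATE() + INTERVAL 1 MONTH, '%Y-%m-01')
      GROUP BY ope_id
      ORDER BY total_shifts DESC;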

  • Turning on multiple result sets in an ODBC connection to SQL Server

    - by David Griffiths
    I have an application that originally needed to connect to Sybase (via ODBC), but I've needed to add the ability to connect to SQL Server as well. As ODBC should be able to handle both, I thought I was in a good position. Unfortunately, SQL Server will not let me, by default, nest ODBC commands and ODBCDataReaders; it complains that the connection is busy. I know that I had to specify that multiple active result sets (MARS) were allowed in similar circumstances when connecting to SQL Server via a native driver, so I thought it wouldn't be an issue. The DSN wizard has no such entry when creating a System DSN. Some people have suggested registry hacks to get around this (add a MARS_Connection value of Yes to HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\system-dsn-name), but that did not work. Another suggestion was to create a file DSN and add "MARS_Connection=YES" to it. That didn't work either. Finally, a DSN-less connection string. I've tried this one (using MultipleActiveResultSets, the same keyword a SQL Server connection string would use):

      "Driver={SQL Native Client};Server=xxx.xxx.xxx.xxx;Database=someDB;Uid=u;Pwd=p;MultipleActiveResultSets=True;"

    and this one:

      "Driver={SQL Native Client};Server=192.168.75.33\\ARIA;Database=Aria;Uid=sa;Pwd=service;MARS_Connection=YES;"

    I have checked the various connection-string sites; they all suggest what I've already tried.
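
    For reference, a minimal sketch of the nested-reader scenario over a DSN-less ODBC connection from C#. System.Data.Odbc is assumed, the table names are placeholders, and the driver name depends on which SQL Server Native Client version is installed; the MARS_Connection keyword is recognised by the Native Client ODBC drivers but not by the older {SQL Server} driver, so whether it takes effect hinges on which driver the string names.

      // Sketch: two commands active on one ODBC connection. Without MARS this is
      // exactly where the "connection is busy" error appears.
      using System.Data.Odbc;

      class MarsSketch
      {
          static void Main()
          {
              string connStr = "Driver={SQL Server Native Client 10.0};" +
                               "Server=xxx.xxx.xxx.xxx;Database=someDB;" +
                               "Uid=u;Pwd=p;MARS_Connection=yes;";
              using (var conn = new OdbcConnection(connStr))
              {
                  conn.Open();
                  using (var outer = new OdbcCommand("SELECT id FROM t1", conn))
                  using (var reader = outer.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          // Second command while the reader is still open.
                          using (var inner = new OdbcCommand("SELECT COUNT(*) FROM t2", conn))
                          {
                              inner.ExecuteScalar();
                          }
                      }
                  }
              }
          }
      }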

  • Architecture for database analytics

    - by David Cournapeau
    Hi. We have an architecture where we provide each customer with Business Intelligence-like services for their website (internet merchants). Now I need to analyze that data internally (for algorithmic improvement, performance tracking, etc.), and it is potentially quite heavy: we have up to millions of rows per customer per day, and I may want to know how many queries we had in the last month, compared week by week, and so on; that is on the order of billions of entries, if not more. The way it is currently done is quite standard: daily scripts scan the databases and generate big CSV files. I don't like this solution for several reasons: as is typical of such scripts, they fall into the write-once-and-never-touched-again category; tracking things in near real time is necessary (we have a separate toolset to query the last few hours at the moment); and it is slow and not very agile. Although I have some experience in dealing with huge datasets for scientific use, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of problem.

  • Convert Google Analytics cookies to Local/Session Storage

    - by David Murdoch
    Google Analytics sets 4 cookies that will be sent with all requests to that domain (and often its subdomains). From what I can tell, no server actually uses them directly; they're only sent with __utm.gif as a query param. Now, obviously Google Analytics reads, writes and acts on their values, and they will need to be available to the GA tracking script. So what I am wondering is whether it is possible to: (1) rewrite the __utm* cookies to local storage after ga.js has written them; (2) delete them after ga.js has run; (3) rewrite the cookies FROM local storage back to cookie form right before ga.js reads them; (4) start over. Or, monkey-patch ga.js to use local storage before it begins the cookie read/write part. Obviously, if we are going so far out of the way to remove the __utm* cookies, we'll want to also use the async variant of Analytics. I'm guessing the down vote was because I didn't ask a question. DOH! My questions are: Can it be done as described above? If so, why hasn't it been done? I have a default HTML/CSS/JS boilerplate template that passes YSlow, PageSpeed, and Chrome's Audit with near-perfect scores. I'm really looking for a way to squeeze those remaining cookie bytes from Google Analytics in browsers that support local storage.
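
    A rough sketch of the round trip described in steps (1) through (3), assuming plain document.cookie access and localStorage support; deciding when to call each half relative to ga.js is the hard part and is left open. Note that if ga.js set the cookies with a domain attribute, the expiry line would need a matching domain to actually delete them.

      // Stash __utm* cookies in localStorage and expire the originals (steps 1-2).
      function stashUtmCookies() {
        document.cookie.split(/;\s*/).forEach(function (pair) {
          var eq = pair.indexOf('=');
          var name = pair.slice(0, eq);
          if (/^__utm/.test(name)) {
            localStorage.setItem(name, pair.slice(eq + 1));
            // May also need "; domain=.example.com" to match how ga.js set it.
            document.cookie = name + '=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
          }
        });
      }

      // Put them back just before ga.js needs to read them again (step 3).
      function restoreUtmCookies() {
        for (var i = 0; i < localStorage.length; i++) {
          var name = localStorage.key(i);
          if (/^__utm/.test(name)) {
            document.cookie = name + '=' + localStorage.getItem(name) + '; path=/';
          }
        }
      }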

  • Creating A Single Threaded Server with AnyEvent (Perl)

    - by David Williams
    I'm working on creating a local service to listen on localhost and provide a basic call-and-response type interface. What I'd like to start with is a baby server that you can connect to over telnet and that echoes what it receives. I've heard AnyEvent is great for this, but the documentation for AnyEvent::Socket does not give a very good example of how to do this. I'd like to build this with AnyEvent, AnyEvent::Socket and AnyEvent::Handle. Right now the little server code looks like this:

      #!/usr/bin/env perl
      use AnyEvent;
      use AnyEvent::Handle;
      use AnyEvent::Socket;

      my $cv   = AnyEvent->condvar;
      my $host = '127.0.0.1';
      my $port = 44244;

      tcp_server($host, $port, sub {
          my ($fh) = @_;
          my $cv = AnyEvent->condvar;
          my $handle;
          $handle = AnyEvent::Handle->new(
              fh      => $fh,
              poll    => "r",
              on_read => sub {
                  my ($self) = @_;
                  print "Received: " . $self->rbuf . "\n";
                  $cv->send;
              }
          );
          $cv->recv;
      });

      print "Listening on $host\n";
      $cv->wait;

    This doesn't work, and if I telnet to localhost:44244 I get this:

      EV: error in callback (ignoring): AnyEvent::CondVar: recursive blocking wait attempted at server.pl line 29.

    I think if I understand how to make a mini, single-threaded server that simply prints out whatever it's given and then waits for more input, I could take it a lot further from there. Any ideas?
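
    One shape the accept callback often takes (a sketch built on the same modules, not a tested drop-in): keep each AnyEvent::Handle alive in a hash and avoid blocking on a condvar inside the callback, so the only blocking wait is the outer one at the end of the script.

      # Non-blocking echo callback: replies go out via push_write, handles are kept
      # in %clients so they are not garbage-collected, and no inner condvar is used.
      my %clients;
      tcp_server($host, $port, sub {
          my ($fh) = @_;
          my $handle;
          $handle = AnyEvent::Handle->new(
              fh       => $fh,
              on_read  => sub {
                  my ($self) = @_;
                  $self->push_read(line => sub {
                      my ($h, $line) = @_;
                      $h->push_write("Received: $line\n");
                  });
              },
              on_eof   => sub { delete $clients{fileno $fh} },
              on_error => sub { delete $clients{fileno $fh} },
          );
          $clients{fileno $fh} = $handle;
      });
      # ...then block exactly once, at the end of the script, with $cv->recv as before.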

  • Is there a good argument for software patents?

    - by David Nehme
    Now that it looks like software patents are going to be severely limited, does anyone have a good argument for keeping them? It seems like copyright law serves software fine, and patents just add overhead to what should be an almost frictionless process. Are there any examples of software that wouldn't have been written if not for patents?

  • How can I iterate over a recordset within a stored procedure?

    - by David
    I need to iterate over a recordset from a stored procedure and execute another stored procedure using each row's fields as arguments. I can't do this iteration in application code. I have found samples on the internet, but they all seem to deal with a counter, and I'm not sure my problem involves a counter. I need the T-SQL equivalent of a foreach. Currently, my first stored procedure stores its recordset in a temp table, #mytemp. I assume I will call the secondary stored procedure with something like:

      while (something)
          execute nameofstoredprocedure arg1, arg2, arg3
      end
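
    For reference, a cursor is the usual way to get a row-by-row foreach over a temp table in T-SQL. A sketch, assuming #mytemp exposes three columns; the column names and INT types are placeholders for whatever the table actually holds.

      -- Walk #mytemp one row at a time and call the second procedure per row.
      DECLARE @arg1 INT, @arg2 INT, @arg3 INT;

      DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR
          SELECT arg1, arg2, arg3 FROM #mytemp;

      OPEN row_cursor;
      FETCH NEXT FROM row_cursor INTO @arg1, @arg2, @arg3;

      WHILE @@FETCH_STATUS = 0
      BEGIN
          EXEC nameofstoredprocedure @arg1, @arg2, @arg3;
          FETCH NEXT FROM row_cursor INTO @arg1, @arg2, @arg3;
      END

      CLOSE row_cursor;
      DEALLOCATE row_cursor;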

  • Update Azure Service Configuration File using PowerShell

    - by David Osborn
    I'm trying to write a PowerShell script that updates each of the DiagnosticsConnectionString and DataConnectionString values below, but I can't seem to find each individual Role node using:

      $serviceConfig.ServiceConfiguration.SelectSingleNode("Role[@name='MyService_WorkerRole']")

    Doing:

      echo $serviceConfig.ServiceConfiguration.Role

    lists out both Role nodes for me, so I know it is working up to that point, but after that I am not having much success. $serviceConfig contains the following XML:

      <?xml version="1.0"?>
      <ServiceConfiguration serviceName="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
        <Role name="MyService_WorkerRole">
          <Instances count="1" />
          <ConfigurationSettings>
            <Setting name="DiagnosticsConnectionString" value="really long string" />
            <Setting name="DataConnectionString" value="really long string 2" />
          </ConfigurationSettings>
        </Role>
        <Role name="MyService_WebRole">
          <Instances count="1" />
          <ConfigurationSettings>
            <Setting name="DiagnosticsConnectionString" value="really long string 3" />
            <Setting name="DataConnectionString" value="really long string 4" />
          </ConfigurationSettings>
        </Role>
      </ServiceConfiguration>
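
    The XPath call is most likely tripping over the document's default namespace (the xmlns on ServiceConfiguration), which SelectSingleNode does not apply automatically. A sketch that sidesteps XPath entirely by filtering with Where-Object instead; the new values and the save path are placeholders, and $serviceConfig is assumed to be the [xml] document shown above.

      # Find the worker role by name, rewrite both settings, then save the file.
      $role = $serviceConfig.ServiceConfiguration.Role |
              Where-Object { $_.name -eq 'MyService_WorkerRole' }

      foreach ($setting in $role.ConfigurationSettings.Setting) {
          if ($setting.name -eq 'DiagnosticsConnectionString') { $setting.value = 'new diagnostics string' }
          if ($setting.name -eq 'DataConnectionString')        { $setting.value = 'new data string' }
      }

      $serviceConfig.Save('C:\path\to\ServiceConfiguration.cscfg')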

  • Serialization of non-required fields in protobuf-net

    - by David Hedlund
    I have a working Java client that communicates with Google through protobuf-serialized messages. I am currently trying to translate that client into C#. I have a .proto file where the parameter appId is an optional string. Its default value in the C# representation, as generated by the protobuf-net library, is an empty string, just as it is in the Java representation of the same file.

      message AppsRequest {
        optional AppType appType = 1;
        optional string query = 2;
        optional string categoryId = 3;
        optional string appId = 4;
        optional bool withExtendedInfo = 6;
      }

    I find that when I explicitly set appId to "" in the Java client, the client stops working (403 Bad Request from Google). When I explicitly set appId to null in the Java client, everything works, but only because hasAppId is being set to false (I'm uncertain as to how that affects the serialization). In the C# client, I always get 403 responses. I don't see any logic behind the distinction between not setting a value and setting the default value, yet it seems to make all the difference in the Java client. Since the output is always a binary stream, I am not sure whether the successful Java messages are being serialized with an empty string, or not serialized at all. In the C# client, I've tried setting IsRequired to true on the ProtoMember attribute to force the field to serialize, and I've tried setting the default value to null and explicitly to "", so I'm quite sure I've tried some configuration where the value is being serialized. I've also played around with ProtoBuf.ProtoIgnore and, at some point, removing the appId parameter altogether, but I haven't been able to avoid the 403 errors in C#. I've tried manually copying the serialized string from Java, and that resolved my issues, so I'm certain that the rest of the HTTP request is working and the error can be traced to the serialized object. My serialization is simply this:

      var clone = ProtoBuf.Serializer.DeepClone(request);
      MemoryStream ms = new MemoryStream(2000);
      ProtoBuf.Serializer.Serialize(ms, clone);
      var bytearr = ms.ToArray();
      string encodedData = Convert.ToBase64String(bytearr);

    I'll admit to not being quite sure what DeepClone does. I've tried both with and without it...
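
    If the goal is to reproduce Java's hasAppId == false behaviour (field omitted from the wire rather than sent as ""), protobuf-net's conditional-serialization conventions may help. The sketch below uses a hand-written contract purely for illustration; whether the same trick can be bolted onto your generated type is an assumption, and ShouldSerializeAppId follows the ShouldSerializeXxx() / XxxSpecified naming pattern that protobuf-net checks.

      // Hand-written contract for illustration only; field number 4 matches the
      // .proto above. When ShouldSerializeAppId() returns false the field is not
      // written at all, which mirrors Java's hasAppId == false.
      [ProtoBuf.ProtoContract]
      public class AppsRequest
      {
          [ProtoBuf.ProtoMember(4)]
          public string AppId { get; set; }

          public bool ShouldSerializeAppId()
          {
              return AppId != null;   // serialize only when a value was really set
          }
      }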

  • Entity Framework 4 - Handling very large (1000+ tables) data models?

    - by David Kreps
    We've got a database with over 1000 tables and would like to consider using EF4 for our data access layer, but I'm concerned about the practical realities of using it for such a large data model. I've seen this question and read about the suggested solutions here and here. These may work, but they appear to refer to the first version of the Entity Framework (and are more complex than I'd like). Does anyone know if these solutions have been improved upon in EF4? Or have other suggestions altogether? Thanks.

  • ServicedComponent not being disposed in finaliser

    - by David Gray Wright
    Questions needing answers: Does the finalizer of the client-side ServicedComponent call ServicedComponent.DisposeObject or Dispose? How should destruction (release of memory) occur in the COM server in relation to its usage in the client? Basically, we are reaching a 2 GB limit on the process size (memory) of the COM server because memory is not being released. Is the solution to explicitly call Dispose, or to use a using statement in the client?
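
    For context, a sketch of deterministic client-side cleanup; MyComponent and DoWork are hypothetical names, and the class is assumed to derive from System.EnterpriseServices.ServicedComponent. Leaving the release to the finalizer means the server-side object lives until the GC gets around to it, which is one common way a COM+ server's memory creeps toward a hard limit.

      using System.EnterpriseServices;

      static void UseComponentOnce()
      {
          // Dispose() via "using" releases the server-side reference deterministically;
          // ServicedComponent.DisposeObject(comp) is the explicit, non-"using" equivalent.
          using (MyComponent comp = new MyComponent())
          {
              comp.DoWork();
          }
      }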

  • Silverlight 4 business application themes

    - by David Brunelle
    Hi. We are starting a new Silverlight 4 Business Application project and are looking for a theme. All we can find on the web are Navigation Application themes, which don't work when applied to a Business Application project; most even produce compilation errors. Is there a place on the web to get themes specifically for that project type, or is there a way to translate a Navigation Application theme into a Business Application theme? Thank you.

  • Did I implement clock drift properly?

    - by David Titarenco
    I couldn't find any clock drift RNG code for Windows anywhere, so I attempted to implement it myself. I haven't run the numbers through ent or DIEHARD yet, and I'm just wondering if this is even remotely correct...

      void QueryRDTSC(__int64* tick) {
          __asm {
              xor eax, eax
              cpuid
              rdtsc
              mov edi, dword ptr tick
              mov dword ptr [edi], eax
              mov dword ptr [edi+4], edx
          }
      }

      __int64 clockDriftRNG() {
          __int64 CPU_start, CPU_end, OS_start, OS_end;

          // get CPU ticks -- uses RDTSC on the Processor
          QueryRDTSC(&CPU_start);
          Sleep(1);
          QueryRDTSC(&CPU_end);

          // get OS ticks -- uses the Motherboard clock
          QueryPerformanceCounter((LARGE_INTEGER*)&OS_start);
          Sleep(1);
          QueryPerformanceCounter((LARGE_INTEGER*)&OS_end);

          // CPU clock is ~1000x faster than mobo clock
          // return raw
          return ((CPU_end - CPU_start)/(OS_end - OS_start));
          // or
          // return a random number from 0 to 9
          // return ((CPU_end - CPU_start)/(OS_end - OS_start)%10);
      }

    If you're wondering why I Sleep(1), it's because if I don't, OS_end - OS_start returns 0 consistently (because of the bad timer resolution, I presume). Basically, (CPU_end - CPU_start)/(OS_end - OS_start) always returns around 1000, with a slight variation based on the entropy of CPU load, maybe temperature, quartz crystal vibration imperfections, etc. Anyway, the numbers have a pretty decent distribution, but this could be totally wrong. I have no idea.

  • Possible to capture all events in a web browser?

    - by David
    I am working on a pet project and am at the research stage. Quick summary: I am trying to intercept all form submits, onclick events, and every single keydown. My library of choice is either jQuery, or jQuery plus Prototype.js. I figure I can batch the events up into a queue/stack and send them back to the server at timed intervals to keep performance relatively stable. Concerns: form submits and changes would be relatively simple, something like:

      $("form :input").bind("change", function() { ... record event ... });

    But how do I ensure I get precedence over the application's handlers? I have a habit of putting return false on a lot of my form handlers when there is a validation issue, and as I understand it, that effectively stops the event in its tracks. My project: for my smaller remote clients I will put their products onto a VPS or run them in my home data center. Currently I use a basic authentication system; given a username/password they see the website and then, hopefully, send me somewhat sane notes on what is broken or should be tweaked. As a better solution, I've written a simple proxy web server that does the above but allows me to have one DNS entry; depending on credentials it makes an internal request, relaying headers and rewriting URLs as needed. Every single text/html or application/* request is compressed and shoved into a SQLite table so I can partially replay what they've done. Now I am shifting to the frontend and would like to capture clicks, keydowns, and submits on everything on the page.
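
    On the precedence concern, one hedged approach is to listen in the capture phase on document rather than binding alongside the page's own handlers: capture-phase listeners run before the bubbling handlers, so an application handler that does return false cannot hide the event from the recorder. The sketch below uses plain addEventListener (older IE would need attachEvent and has no capture phase); the /event-log endpoint and the 5-second flush interval are placeholders.

      // Queue interesting events at the document level, flush in batches.
      var eventQueue = [];

      ['click', 'keydown', 'submit', 'change'].forEach(function (type) {
        document.addEventListener(type, function (e) {
          eventQueue.push({
            type: type,
            tag: e.target && e.target.tagName,
            time: new Date().getTime()
          });
        }, true);                         // true = capture phase
      });

      setInterval(function () {
        if (eventQueue.length === 0) { return; }
        var batch = eventQueue.splice(0, eventQueue.length);
        $.post('/event-log', { events: JSON.stringify(batch) });
      }, 5000);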

  • PostgreSQL pgdb driver raises "can't rollback" exception

    - by David Parunakian
    Hello, for some reason I'm getting an OperationalError with a "can't rollback" message when I attempt to roll back my transaction in the following context:

      try:
          cursors[instance].execute("lock revision, app, timeout IN SHARE MODE")
          cursors[instance].execute("insert into app (type, active, active_revision, contents, z) values ('session', true, %s, %s, 0) returning id", (cRevision, sessionId))
          sAppId = cursors[instance].fetchone()[0]
          cursors[instance].execute("insert into revision (app_id, type) values (%s, 'active')", (sAppId,))
          cursors[instance].execute("insert into timeout (app_id, last_seen) values (%s, now())", (sAppId,))
          connections[instance].commit()
      except pgdb.DatabaseError, e:
          connections[instance].rollback()
          return "{status: 'error', errno:4, errmsg: \"%s\"}"%(str(e).replace('\"', '\\"').replace('\n', '\\n').replace('\r', '\\r'))

    The driver in use is PGDB. What is fundamentally wrong here?
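
    One hedged possibility: if the connection itself has died, or there is no transaction left to undo, then rollback() itself raises, and that surfaces as exactly this kind of secondary "can't rollback" error. A sketch of guarding the rollback separately, mirroring the structures above; the reconnect parameters and the error_response helper are placeholders.

      try:
          cursors[instance].execute("lock revision, app, timeout IN SHARE MODE")
          # ... the inserts from above ...
          connections[instance].commit()
      except pgdb.DatabaseError, e:
          try:
              connections[instance].rollback()
          except pgdb.Error:
              # The connection is unusable (dropped, or nothing to roll back),
              # so rebuild it rather than trying to roll back again.
              connections[instance] = pgdb.connect(database='mydb')   # placeholder DSN
              cursors[instance] = connections[instance].cursor()
          return error_response(e)    # placeholder for the JSON formatting above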

  • ActionScript: Determine whether a superclass implements a particular interface?

    - by David Wolever
    Is there any non-hacky way to determine whether a class's superclass implements a particular interface? For example, assume I've got:

      class A extends EventDispatcher implements StuffHolder {
          private var myStuff = someKindOfStuff;

          public function getStuff():Array {
              if (super is StuffHolder)   // <<< this doesn't work
                  return super['getStuff']().concat([myStuff]);
              return [myStuff];
          }
      }

      class B extends A {
          private var myStuff = anotherKindOfStuff;
      }

    How could I perform that "super is StuffHolder" test in a way that, well, works? In this case, it always returns true.
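
    For what it's worth, a sketch of a workaround that avoids the runtime check entirely: let the base class build only its own array and have each subclass override getStuff and extend the result of super.getStuff(). The names reuse those above, with the shadowed myStuff renamed in the subclass; package declarations and imports are omitted.

      // Base class owns its contribution; subclasses append theirs via override.
      public class A extends EventDispatcher implements StuffHolder {
          protected var myStuff:* = someKindOfStuff;

          public function getStuff():Array {
              return [myStuff];
          }
      }

      public class B extends A {
          protected var myOtherStuff:* = anotherKindOfStuff;

          override public function getStuff():Array {
              return super.getStuff().concat([myOtherStuff]);
          }
      }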

  • Change page layout in the middle of a LaTeX document

    - by David
    I'm looking for a way to change some page layout dimensions in the middle of a LaTeX document. The reason is, I'd like to have smaller margins and longer lines in the "References" section of my report document (basically because short lines aren't so important there and I can save space). In my preamble I have (works fine):

      \setlength\textwidth{130mm}
      \setlength\oddsidemargin{14.6mm}

    In the document I simply try to re-set them at the right point, but they're ignored:

      ... last paragraph ends here.
      \newpage
      \setlength\textwidth{150mm}      % +20mm text width
      \setlength\oddsidemargin{13.6mm} % -10mm left margin (so it stays centered)
      References ...

    The geometry package is useful but only for global adjustments in the preamble, so I can't use it here. Is it not possible to change the page layout at some point in the document?
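
    A hedged alternative: \textwidth is effectively fixed once typesetting starts, so mid-document \setlength tweaks tend to be ignored or to misplace the text block. The adjustwidth environment from the changepage package (or \newgeometry, in later releases of geometry) is the usual way to widen just one section; a sketch matching the 150mm target above, with negative arguments pulling both margins in by 10mm each.

      % In the preamble:
      \usepackage{changepage}

      % At the point where the references begin:
      \newpage
      \begin{adjustwidth}{-10mm}{-10mm}   % widen the text block by 10mm on each side
      References ...
      \end{adjustwidth}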

  • Creating a Cocoa Framework

    - by David Schiefer
    Hi, I've created a working Cocoa framework which I wish to redistribute. The problem, however, is that it won't run outside of Xcode. I've read something about @executable_path/../Frameworks, which I did not include because I don't know where to put it. For now I run my app in Xcode using the DYLD_FRAMEWORK_PATH variable, which works fine, but only in Xcode; if I try to run the app on its own, it crashes straight away and says IMAGE NOT FOUND. I'm sure @executable_path/../Frameworks is what's missing, but I don't know where to put it. Could anyone help me out, please? Thanks.

  • Problems migrating databinding in VB.NET from Winforms to ASP.NET 2.0

    - by David
    And this was supposed to be so easy... I have existing business and data access layers that handle the retrieval and update of the data in question. These work great with the existing WinForms application (.NET 2.0). Now, in trying to write a new web-based UI, I'm running into all sorts of problems (the last time I wrote ASP.NET code was in 1.1). Specifically, I can't data-bind a text box to a business object. Oh, sure, there's the ObjectDataSource, but that wants to know how to do CRUD operations on the data. What I'm looking for is something that acts like the 'classic' binding objects, so that, in my code, it's as simple as retrieving the object and doing a refresh. Data components like FormView and DetailsView are so generic-looking that it's ridiculous. The existing application has tabbed dialogs, text boxes grouped by panels, etc. On top of that, I have a directive to use master pages, and unless one control causes it, I can't seem to get the content section to expand. I can't just put a text box 'below' the bottom of "Content1" and have it resize the content section, which gives me the same results as an earlier question I posted when the footer wasn't being 'pushed down'; relative positioning solved that, but doesn't seem to solve it when placing small text boxes in the area. What I want is fairly simple. Something like:

      bindingobject.datasource = businessdataobject
      bindingobject.refresh

    ...and have the text boxes refresh with the new values, and likewise have the businessdataobject properties updated as the user enters new data. I was able to do this with the GridView (grdRequests.DataSource = lstRequests) by making a list of asp:BoundField tags inside the collection of the GridView. Am I tilting at windmills here?

  • Module Adminhtml blocks not loading

    - by David Tay
    I was working on a Magento module and it was working fine. At some point, while I was trying to enable WYSIWYG on an edit form's 'content' field, my adminhtml grid and edit blocks suddenly stopped being generated. On my system the TinyMCE and Fontis FCKeditor WYSIWYG editor extensions are installed. I'm not sure what I did wrong, but my adminhtml blocks will no longer generate. Here's a dump of all the blocks from my module's adminhtml layout:

      array(17) {
        [0]=>  string(4)  "root"
        [1]=>  string(4)  "head"
        [2]=>  string(13) "head.calendar"
        [3]=>  string(14) "global_notices"
        [4]=>  string(6)  "header"
        [5]=>  string(4)  "menu"
        [6]=>  string(11) "breadcrumbs"
        [7]=>  string(7)  "formkey"
        [8]=>  string(12) "js_translate"
        [9]=>  string(4)  "left"
        [10]=> string(7)  "content"
        [11]=> string(8)  "messages"
        [12]=> string(2)  "js"
        [13]=> string(6)  "footer"
        [14]=> string(8)  "profiler"
        [15]=> string(15) "before_body_end"
        [16]=> string(7)  "wysiwyg"
      }

    As you can see, the last item is "wysiwyg", but in the layout output of other Magento modules there are more blocks. For example, on MathieuF's calendar extension, these are all the layout blocks:

      array(26) {
        [0]=>  string(4)  "root"
        [1]=>  string(4)  "head"
        [2]=>  string(13) "head.calendar"
        [3]=>  string(14) "global_notices"
        [4]=>  string(6)  "header"
        [5]=>  string(4)  "menu"
        [6]=>  string(11) "breadcrumbs"
        [7]=>  string(7)  "formkey"
        [8]=>  string(12) "js_translate"
        [9]=>  string(4)  "left"
        [10]=> string(7)  "content"
        [11]=> string(8)  "messages"
        [12]=> string(2)  "js"
        [13]=> string(6)  "footer"
        [14]=> string(8)  "profiler"
        [15]=> string(15) "before_body_end"
        [16]=> string(7)  "wysiwyg"
        [17]=> string(27) "adminhtml_event.grid.child0"
        [18]=> string(12) "ANONYMOUS_19"
        [19]=> string(27) "adminhtml_event.grid.child1"
        [20]=> string(12) "ANONYMOUS_21"
        [21]=> string(27) "adminhtml_event.grid.child2"
        [22]=> string(20) "adminhtml_event.grid"
        [23]=> string(12) "ANONYMOUS_24"
        [24]=> string(19) "ANONYMOUS_17.child1"
        [25]=> string(14) "content.child0"
      }

    Does anyone have any idea what's wrong? I've already tried Alan Storm's Layout and Config viewers and cannot find any clues as to what I did wrong. Any help would be greatly appreciated.

  • Entity Framework 4 and SQL Compact 4: How to generate database?

    - by David Veeneman
    I am developing an app with Entity Framework 4 and SQL Compact 4, using a Model First approach. I have created my EDM, and now I want to generate a SQL Compact 4.0 database to act as a data store for the model. I bring up the Generate Database Wizard and click the New Connection button to create a connection for the generated file. The Choose Data Source dialog appears, but SQL Compact 4.0 is not listed among the available data sources. I am running VS 2010 SP1 (beta) and I have installed the VS 2010 tools for SQL Compact 4.0. I can create a SQL Compact 4.0 data connection from the Server Explorer; it is only in the Generate Database Wizard that the 4.0 option doesn't appear. By the way, my SQL Compact 4.0 installation does include System.Data.SqlServerCe.Entity.dll, so I should have the SQL Compact components I need. Am I doing something incorrectly, or is this a bug? Does anyone have a fix or a workaround? Thanks for your help.

  • ClickOnce redirect

    - by David Hagan
    Is it possible to deploy an application (using an existing ClickOnce deployment URL, such that users update to that version) which changes the deployment URL of the deployed application? The scenario is that I have a deployed client (A), which is stable and has been in use for over a year, and a new client (B), which is in development and will be used. However, B and A have different UIDs so that they can both be deployed on the same system together. At some point in the future, I'd like to automatically migrate users who have been using A to B, but I'd hope that ClickOnce is well designed enough not to upgrade A to B if I simply place B's install files in A's install directory (because it should be checking those UIDs). I know that a C# application deployed through ClickOnce has some access to its own deployment method, and I'm wondering whether I'm able to change the upgrade location. I'm hoping to do this quietly, without much involvement from the user (and I understand that quiet redirects are heavily frowned upon, for good reasons), and am wondering whether anyone has experience of trying to modify an installed ClickOnce application's deployment/upgrade information via an update.

  • [iText] Inserting Image onCloseDocument

    - by David
    I'm trying to insert an image in the footer of my document using iText's onCloseDocument event. I have the following code:

      public void onCloseDocument(PdfWriter writer, Document document) {
          PdfContentByte pdfByte = writer.getDirectContent();
          try {
              // logo is a non-null global variable
              Image theImage = new Jpeg(logo);
              pdfByte.addImage(theImage, 400.0f, 0.0f, 0.0f, 400.0f, 0.0f, 0.0f);
          } catch (Exception e) {
              e.printStackTrace();
          }
      }

    The code throws no exceptions, but it also fails to insert the image. This identical code is used in onOpenDocument to insert the same logo; the only difference between the two methods is the coordinates passed to pdfByte.addImage. However, I've tried quite a few different coordinates in onCloseDocument and none of them appear anywhere in my document. Is there any troubleshooting technique for detecting content which is displayed off-page in a PDF? If not, can anyone see the problem with my onCloseDocument method? Edit: As a follow-up, it seems that onCloseDocument puts its content on page document.length() + 1 (according to its API). However, I don't know how to change the page number back to document.length() and place my logo on the last page.
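
    A hedged sketch of the usual workaround: draw the footer from onEndPage, which fires while each page is still open for writing, whereas onCloseDocument runs only after the final page has been finished, which is why direct content added there never lands on it. Imports assume the classic com.lowagie packages (adjust for the com.itextpdf namespace if needed), and the position and scaling numbers are placeholders. The helper would be registered with writer.setPageEvent(new FooterLogo(logoBytes)) before the document is opened.

      // Page-event helper that stamps the logo near the bottom of every page.
      import com.lowagie.text.Document;
      import com.lowagie.text.Image;
      import com.lowagie.text.pdf.PdfContentByte;
      import com.lowagie.text.pdf.PdfPageEventHelper;
      import com.lowagie.text.pdf.PdfWriter;

      public class FooterLogo extends PdfPageEventHelper {
          private final byte[] logo;

          public FooterLogo(byte[] logo) {
              this.logo = logo;
          }

          @Override
          public void onEndPage(PdfWriter writer, Document document) {
              try {
                  PdfContentByte pdfByte = writer.getDirectContent();
                  Image image = Image.getInstance(logo);
                  image.scaleToFit(100f, 40f);                               // placeholder size
                  image.setAbsolutePosition(document.left(), document.bottom() - 40f);
                  pdfByte.addImage(image);
              } catch (Exception e) {
                  e.printStackTrace();
              }
          }
      }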

  • Using FUNCTION instead of CREATE FUNCTION in Oracle PL/SQL

    - by sqlgrasshopper5
    I see people writing a function with FUNCTION instead of "CREATE FUNCTION". When I saw this usage on the web I thought it was a typo or something, but in O'Reilly's "Oracle 11g PL/SQL Programming" by Steven Feuerstein, the author uses the same thing. However, I get errors when I execute it. Could somebody explain whether this is legal usage or not? Thanks.
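
    In case it helps to see the two forms side by side: a bare FUNCTION header is legal inside a package (spec and body) or in the declaration section of a PL/SQL block, while only standalone schema-level functions use CREATE [OR REPLACE] FUNCTION. A small sketch; the names are invented for the example.

      -- Standalone function: CREATE is required.
      CREATE OR REPLACE FUNCTION add_one (p_n IN NUMBER) RETURN NUMBER IS
      BEGIN
          RETURN p_n + 1;
      END add_one;
      /

      -- Inside a package, the keyword is just FUNCTION.
      CREATE OR REPLACE PACKAGE demo_pkg AS
          FUNCTION double_it (p_n IN NUMBER) RETURN NUMBER;
      END demo_pkg;
      /

      CREATE OR REPLACE PACKAGE BODY demo_pkg AS
          FUNCTION double_it (p_n IN NUMBER) RETURN NUMBER IS
          BEGIN
              RETURN p_n * 2;
          END double_it;
      END demo_pkg;
      /

      -- Or as a local subprogram in a block's declaration section.
      DECLARE
          FUNCTION triple_it (p_n IN NUMBER) RETURN NUMBER IS
          BEGIN
              RETURN p_n * 3;
          END triple_it;
      BEGIN
          dbms_output.put_line(triple_it(14));
      END;
      /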
