Search Results

Search found 17924 results on 717 pages for 'z order'.


  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

    Read the article

  • CSS Table Formatting to an HTML Table

    - by Rurigok
    I am attempting to apply CSS formatting to two HTML tables, but I cannot get it to work. I am setting up a webpage in HTML and CSS (with the CSS in an external sheet), and the layout of the website depends on the tables. There are two tables, one for the head and another for the body. They are set up so that content is situated in one middle column of 60% width, with one 20%-wide column on each side of the center, along with other table formatting. My question is: how can I format the tables in CSS? I formatted them successfully with HTML attributes, but this will not do. This is the CSS code for the tables - each table has the id layouttable:

        #layouttable { border: 0px; width: 100%; }
        #layouttable td { width: 20%; vertical-align: top; }
        #layouttable td { width: 60%; vertical-align: top; background-color: #E8E8E8; }
        #layouttable td { width: 20%; vertical-align: top; }

    Both tables in the HTML document have, in respective order, these elements (with content inside not shown):

        <table id="layouttable"><tr><td></td><td></td><td></td></tr></table>

    Does anyone have any idea why this CSS is not working, or can write some code to fix it? If further explanation is needed, please ask.
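
    One likely culprit, offered as a hedged guess: an id may only be used once per page, and the three identical #layouttable td rules all match every cell, so only the last one applies. A minimal sketch of one way around this, using a class on the tables and per-column classes on the cells (the class names here are made up for illustration):

        .layouttable { border: 0; width: 100%; }
        .layouttable td { vertical-align: top; }                            /* shared cell rules */
        .layouttable td.side { width: 20%; }                                /* left and right columns */
        .layouttable td.middle { width: 60%; background-color: #E8E8E8; }   /* center column */

        <!-- markup, repeated for both tables -->
        <table class="layouttable">
          <tr><td class="side"></td><td class="middle"></td><td class="side"></td></tr>
        </table>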

    Read the article

  • Socket isn't listed by netstat unless using certain ports

    - by illuzive
    I'm a computer science student with a few years of programming experience. Yesterday, while working on a project (Mac OS X, BSD sockets) at school, I encountered a strange problem. I was adding several modules to a very basic "server" (mostly a bunch of functions to set up and manage a UDP socket on a certain port). While doing this, I started the server from time to time in order to see that everything worked as it should. I've been using port 32000 during the development of the server. When I start the server and run netstat, the socket is listed as expected:

        > netstat -p UDP | grep 32000
        udp46      0      0  *.32000      *.*

    However, when I run the server on other ports (random values between 10000 and 50000), it's not listed by netstat. My thought was that I had somehow hard-coded the port somewhere in the code, but that's not the case. The thing is, I can connect to the socket on any of the tested ports, and it reads data sent to it without any problem at all. It just doesn't get listed by netstat. Does anyone have any idea why this happens? Note: although this is a project at school, it's not homework. This is just something I want to understand for my own benefit.
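
    A small diagnostic that may help rule out a misconfigured port, sketched in plain BSD sockets (not tied to the original server code): bind a UDP socket and then ask the kernel which port it actually got with getsockname().

        #include <stdio.h>
        #include <string.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        int main(void) {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);           /* UDP socket */
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(32000);                      /* port under test */
            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
                perror("bind");
                return 1;
            }
            struct sockaddr_in bound;
            socklen_t len = sizeof(bound);
            getsockname(fd, (struct sockaddr *)&bound, &len);  /* which port did we really get? */
            printf("bound to port %d\n", ntohs(bound.sin_port));
            return 0;
        }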

    Read the article

  • WordPress loop not showing 'else' when no posts exist

    - by Reuben
    When no posts exist I'm used to seeing the message after the else, and I'm not sure why it's not showing it now? Code:

        <?php
        $args = array(
            'post_type'      => 'event',
            'posts_per_page' => 1,
            'post_status'    => 'future',
            'order'          => 'ASC'
        );
        $loop = new WP_Query( $args );
        if ( have_posts() ) : while ( $loop->have_posts() ) : $loop->the_post(); ?>
            <div class="event_preview_title"><?php the_title(); ?></div>
            <div class="event_details">
                <div class="event_date"><?php the_time('m/d/y'); ?></div>
                <div class="event_time"><?php the_time('g:i A'); ?></div>
            </div>
            <?php the_post_thumbnail( array(65,65) ); ?>
            <a class="clickthrough" href="<?php bloginfo('wpurl'); ?>/events"></a>
        <?php endwhile; else: ?>
            <p>Bluebird Books may be coming soon to a neighborhood near you!<br />
            We are currently on hiatus planning our next season's schedule. New tour dates will be posted to this page once confirmed. Meanwhile, inquiries about appearances and programs are welcomed! If you are interested in having Bluebird visit your business, school, or special event, please contact us.</p>
        <?php endif; ?>

    Thanks!
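
    One thing that stands out, offered as a guess: the loop is built from a custom WP_Query stored in $loop, but the if tests the global have_posts(), which looks at the main query rather than $loop. A minimal sketch of the condition using the custom query object:

        <?php
        $loop = new WP_Query( $args );
        if ( $loop->have_posts() ) :                // check the custom query, not the global one
            while ( $loop->have_posts() ) : $loop->the_post();
                // ... event markup ...
            endwhile;
        else : ?>
            <p>Bluebird Books may be coming soon to a neighborhood near you!</p>
        <?php endif;
        wp_reset_postdata();                        // restore the global post data afterwards
        ?>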

    Read the article

  • How to implement message queuing and handling in AWS with NServiceBus

    - by Pete Lunenfeld
    I am creating a new ASP.NET MVC order application in the Amazon (AWS) cloud with the persistence layer at my local datacenter. I will be using the CQRS pattern. The goal of the project is high availability, using queue(s) to store and forward writes (commands/events) that can be picked up and handled asynchronously at my local datacenter. Then, if the WAN or my local datacenter fails, my cloud MVC app can still take orders and just queue them up until processing can resume. My first thought was to use AWS SQS for the queuing and create my own queue consumer/dispatcher/handler in a C# application to process the incoming messages/events:

        MVC (@ Amazon) -- Event/POCO -- SQS -- QueueReader (@ my datacenter) -- DB

    Then I found NServiceBus. NSB seems to handle lots of details very nicely: message handling, retries, error handling, etc. I hate to reinvent the wheel, and NServiceBus seems like a full-featured and mature product that would be perfect for me. But on further research, it does NOT look like NServiceBus is really meant to be used over the WAN in physically separated environments (cloud to my datacenter). Google and SO don't really paint a good picture of using NServiceBus across the WAN like I need. How can I use NServiceBus across the WAN? Or is there a better solution to handle queuing and message handling between Amazon and my local datacenter?
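
    For reference, a rough sketch of the hand-rolled SQS route described above, assuming the AWS SDK for .NET and Json.NET (queue URL, message type, and Handle() are placeholders, and the exact client calls vary by SDK version); this illustrates the store-and-forward idea, not NServiceBus itself:

        // producer side (cloud MVC app): serialize the command and push it to SQS
        var sqs = new AmazonSQSClient();                        // credentials/region from config
        sqs.SendMessage(new SendMessageRequest
        {
            QueueUrl    = "https://sqs.us-east-1.amazonaws.com/123456789012/orders", // placeholder
            MessageBody = JsonConvert.SerializeObject(placeOrderCommand)
        });

        // consumer side (local datacenter): poll, handle, then delete
        var response = sqs.ReceiveMessage(new ReceiveMessageRequest
        {
            QueueUrl            = queueUrl,
            MaxNumberOfMessages = 10,
            WaitTimeSeconds     = 20                            // long polling
        });
        foreach (var msg in response.Messages)
        {
            Handle(JsonConvert.DeserializeObject<PlaceOrder>(msg.Body));   // placeholder handler
            sqs.DeleteMessage(queueUrl, msg.ReceiptHandle);
        }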

    Read the article

  • Why does the '#weight' property sometimes not have any effect in Drupal forms?

    - by Adrian
    Hello, I'm trying to create a node form for a custom type. I have Organic Groups and Taxonomy both enabled, but want their elements to come out in a non-standard order. So I've implemented hook_form_alter and set the #weight property of the og_nodeapi subarray to -1000, but it still goes after taxonomy and menu. I even tried changing the subarray to a fieldset (to force it to actually be rendered), but no dice. I also tried setting $form['taxonomy']['#weight'] = 1000 (I have two vocabs so it's already being rendered as a fieldset) but that didn't work either. I set the weight of my module very high and confirmed in the system table that it is indeed the highest module on the site - so I'm all out of ideas. Any suggestions?

    Update: While I'm not exactly sure how, I did manage to get the taxonomy fieldset to sink below everything else, but now I have a related problem that's hopefully easier to understand. Within the taxonomy fieldset, I have two items (a tags field and a multi-select), and I wanted to add some instructions in hook_form_alter as follows:

        $form['taxonomy']['instructions'] = array(
          '#value'  => "These are the instructions",
          '#weight' => -1,
        );

    You guessed it, this appears after the terms inserted by the taxonomy module. However, if I change this to a fieldset:

        $form['taxonomy']['instructions'] = array(
          '#type'   => 'fieldset',               // <-- here
          '#title'  => 'Instructions',           // <-- and here for good measure
          '#value'  => "These are the instructions",
          '#weight' => -1,
        );

    then it magically floats to the top as I'd intended. I also tried textarea (this also worked) and explicitly saying markup (this did not). So basically, changing the type from "markup" (the default IIRC) to "fieldset" has the effect of no longer ignoring its weight.

    Read the article

  • Why am I returning empty records when querying in mysql with php?

    - by Brian Bolton
    I created the following script to query a table and return the first 30 results. The query returns 30 results, but they do not have any text or information. Why would this be? The table stores Vietnamese characters. The database is MySQL 4. Here's the page: http://saomaidanang.com/recentposts.php Here's the code:

        <?php
        header( 'Content-Type: text/html; charset=utf-8' );
        //CONNECTION INFO
        $dbms = 'mysql';
        $dbhost = 'xxxxx';
        $dbname = 'xxxxxxx';
        $dbuser = 'xxxxxxx';
        $dbpasswd = 'xxxxxxxxxxxx';
        $conn = mysql_connect($dbhost, $dbuser, $dbpasswd ) or die('Error connecting to mysql');
        mysql_select_db($dbname , $conn);
        //QUERY
        $result = mysql_query("SET NAMES utf8");
        $cmd = 'SELECT * FROM `phpbb_posts_text` ORDER BY `phpbb_posts_text`.`post_subject` DESC LIMIT 0, 30 ';
        $result = mysql_query($cmd);
        ?>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html dir="ltr">
        <head>
        <title>recent posts</title>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
        <p>
        <?php
        //DISPLAY
        while ($myrow = mysql_fetch_row($result)) {
            echo 'post subject:';
            echo(utf8_encode($myrow ['post_subject']));
            echo 'post text:';
            echo(utf8_encode($myrow ['post_text']));
        }
        ?>
        </p>
        </body>
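
    One detail worth checking, offered as a guess at the cause: mysql_fetch_row() returns a numerically indexed array, so string keys like $myrow['post_subject'] come back empty. A minimal sketch of the display loop with an associative fetch (and without re-encoding data that is already UTF-8):

        // mysql_fetch_row() gives numeric indices only; fetch an associative array instead.
        while ($myrow = mysql_fetch_assoc($result)) {
            echo 'post subject: ' . $myrow['post_subject'];
            echo 'post text: '    . $myrow['post_text'];
            // note: with SET NAMES utf8 the data is already UTF-8, so a further
            // utf8_encode() call would double-encode the Vietnamese characters.
        }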

    Read the article

  • Precompile assets for a rails engine

    - by Peter Ehrlich
    In a standard app, I have this line in my production.rb, which creates endpoints for non-default precompiled assets:

        config.assets.precompile += %w( mobile.css )

    My Rails engine is a standard Sinatra app. It has its own assets. In development, these assets are served fine, presumably because the web requests are handled by Rails and Sprockets. In production I'm getting 404s on the assets, and I think I have to manually tell Sprockets to provide the files. How can this be done without tightly coupling the two? It isn't evident how to set up environment-specific initializers for engines. Is this ever done? Not only, for example, is config/development.rb within the engine not loaded, but there's no way to get the application class itself without knowing its name, in order to modify configuration. And even if there was, it seems that having any engine able to reconfigure the main app would be a very bad idea. So maybe it's better to let asset handling be done by Sinatra itself? Or by another instance of Sprockets for the engine? How do other engines handle this?

    Read the article

  • What is a reasonable OSGi development workflow?

    - by levand
    I'm using OSGi for my latest project at work, and it's pretty beautiful as far as modularity and functionality. But I'm not happy with the development workflow. Eventually, I plan to have 30-50 separate bundles, arranged in a dependency graph - supposedly, this is what OSGi is designed for. But I can't figure out a clean way to manage dependencies at compile time. Example: you have bundles A and B. B depends on packages defined in A. Each bundle is developed as a separate Java project. In order to compile B, A has to be on the javac classpath. Do you:

    1. Reference the file system location of project A in B's build script?
    2. Build A and throw the jar into B's lib directory?
    3. Rely on Eclipse's "referenced projects" feature and always use Eclipse's classpath to build (ugh)?
    4. Use a common "lib" directory for all projects and dump the bundle jars there after compilation?
    5. Set up a bundle repository, parse the manifest from the build script and pull down the required bundles from the repository?

    No. 5 sounds the cleanest, but also like a lot of overhead.

    Read the article

  • BinaryFormatter in C# a good way to read files?

    - by mr-pac
    I want to read a binary file which was created outside of my program. One obvious way in C# to read a binary file is to define a class representing the file, then use a BinaryReader, read from the file via the Read* methods, and assign the return values to the class properties. What I don't like about this approach is that I have to manually write code that reads the file, even though the defined structure already represents how the file is stored. I also have to keep the read order correct. After looking around a bit I came across BinaryFormatter, which can automatically serialize and deserialize objects in binary format. One great advantage would be that I could read and also write the file without writing additional code. However, I wonder if this approach is good for files created by other programs, and not just for serialized .NET objects. Take for example a graphics format file like BMP. Would it be a good idea to read the file with a BinaryFormatter, or is it better to read and write manually via BinaryReader and BinaryWriter? Or are there any other approaches which suit this better? I'm not looking for concrete examples, just advice on the best way to implement this.
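
    As background for the trade-off, a small sketch (needs System and System.IO): BinaryFormatter only understands its own .NET serialization format, so a foreign format like BMP has to be read field by field against its documented header layout, which is exactly what BinaryReader is for:

        // Minimal sketch: read the first fields of a BMP header with BinaryReader.
        using (var reader = new BinaryReader(File.OpenRead("image.bmp")))
        {
            var magic        = new string(reader.ReadChars(2)); // "BM" for a valid bitmap
            uint fileSize    = reader.ReadUInt32();             // total file size in bytes
            reader.ReadUInt32();                                 // two reserved 16-bit fields
            uint pixelOffset = reader.ReadUInt32();              // where the pixel data starts
            uint dibSize     = reader.ReadUInt32();              // size of the DIB header
            int width        = reader.ReadInt32();               // image width in pixels
            int height       = reader.ReadInt32();               // image height in pixels
            Console.WriteLine($"{magic} {width}x{height}, pixels at offset {pixelOffset}");
        }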

    Read the article

  • Dealing with array of IntPtr

    - by Padu Merloti
    I think I'm close and I bet the solution is something stupid. I have a C++ native DLL where I define the following function:

        DllExport bool __stdcall Open(const char* filePath, int *numFrames, void** data)
        {
            //creates the list of arrays here... don't worry, lifetime is managed somewhere else
            //foreach item of the list:
            {
                BYTE* pByte = GetArray(i);
                //here's where my problem lives
                *(data + i * sizeofarray) = pByte;
            }
            *numFrames = total number of items in the list
            return true;
        }

    Basically, given a file path, this function creates a list of byte arrays (BYTE*) and should return a list of pointers via the data param, each pointing to a different byte array. I want to pass an array of IntPtr from C# and be able to marshal each individual array in order. Here's the code I'm using:

        [DllImport("mydll.dll", EntryPoint = "Open")]
        private static extern bool MyOpen(
            string filePath,
            out int numFrames,
            out IntPtr[] ptr);

        internal static bool Open(
            string filePath,
            out int numFrames,
            out Bitmap[] images)
        {
            var ptrList = new IntPtr[512];
            MyOpen(filePath, out numFrames, out ptrList);
            images = new Bitmap[numFrames];
            var len = 100; //for sake of simplicity
            for (int i = 0; i < numFrames; i++)
            {
                var buffer = new byte[len];
                Marshal.Copy(ptrList[i], buffer, 0, len);
                images[i] = CreateBitmapFromBuffer(buffer, height, width);
            }
            return true;
        }

    Problem is in my C++ code. When I assign *(data + i * sizeofarray) = pByte; it corrupts the array of pointers... what am I doing wrong?
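
    A hedged guess at the corruption: data is a void**, so the i-th slot is data[i]; scaling the offset by the array size walks far past the 512 pointers the caller allocated. On the C# side, an out IntPtr[] also asks the marshaler to hand back a new array rather than fill the one you passed. A minimal sketch of both sides with only the indexing and the parameter attribute adjusted:

        // C++ side: store the i-th array pointer in the i-th slot
        data[i] = pByte;                 // equivalent to *(data + i) = pByte;

        // C# side: let the runtime copy results back into the caller-allocated array
        [DllImport("mydll.dll", EntryPoint = "Open")]
        private static extern bool MyOpen(
            string filePath,
            out int numFrames,
            [In, Out] IntPtr[] ptr);     // pass the pre-allocated IntPtr[512], not 'out'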

    Read the article

  • Non standard interaction among two tables to avoid very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b uniquely determines ts.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts values are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics here are that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
            row[0],
            row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that right now I have 1000 a's; assume a constant average number of b's per a (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
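
    In newer pandas versions (0.19+), this "first observation at or after each timestamp, per group" lookup is what merge_asof with direction='forward' does, without materializing the full join. A sketch under that assumption:

        import pandas as pd

        # merge_asof requires both sides sorted by the merge key
        left = AA.sort_values('ts')
        right = (B.reset_index()
                  .rename(columns={'ts': 'first_after'})
                  .sort_values('first_after'))

        C = pd.merge_asof(left, right,
                          left_on='ts', right_on='first_after',
                          by='a',                      # match within each event type
                          direction='forward',         # first right-hand value >= ts
                          allow_exact_matches=True)
        C = C.set_index(['a', 'b'])['first_after']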

    Read the article

  • How to encapsulate a WinAPI application into a C++ class

    - by Semen Semenych
    There is a simple WinAPI application. All it does currently is this:

    - register a window class
    - register a tray icon with a menu
    - create a value in the registry in order to autostart
    - and finally, check that it is the only running instance, using a mutex

    As I'm used to writing code mainly in C++, and no MFC is allowed, I'm forced to encapsulate this into C++ classes somehow. So far I've come up with this design:

    - there is a class that represents the application
    - it keeps all the wndclass, hinstance, etc. variables, where the hinstance is passed as a constructor parameter, as well as the iCmdShow and others (see the WinMain prototype)
    - it has functions for registering the window class, tray icon, and registry information
    - it encapsulates the message loop in a function

    In WinMain, the following is done:

        Application app(hInstance, szCmdLine, iCmdShow);
        return app.exec();

    and the constructor does the following:

        registerClass();
        registerTray();
        registerAutostart();

    So far so good. Now the question is: how do I create the window procedure (it must be static, as it's a C-style pointer to a function) AND keep track of what the application object is, that is, keep a pointer to an Application around? The main question is: is this how it's usually done? Am I complicating things too much? Is it fine to pass hInstance as a parameter to the Application constructor? And where's the WndProc? Maybe WndProc should be outside of the class and the Application pointer be global? Then WndProc invokes Application methods in response to various events.
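
    For reference, the usual way to bridge a static window procedure back to a C++ object is to pass the this pointer through CreateWindowEx's last parameter and stash it in the window's user data. A minimal sketch, assuming an Application class with a member handler (handleMessage is a name made up here):

        // Pass 'this' as the lpParam argument of CreateWindowEx:
        //   CreateWindowEx(0, className, title, style, x, y, w, h, nullptr, nullptr, hInstance, this);

        LRESULT CALLBACK Application::WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            Application* self = nullptr;
            if (msg == WM_NCCREATE) {
                // CREATESTRUCT carries the lpParam that was passed to CreateWindowEx
                auto cs = reinterpret_cast<CREATESTRUCT*>(lParam);
                self = static_cast<Application*>(cs->lpCreateParams);
                SetWindowLongPtr(hwnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(self));
            } else {
                self = reinterpret_cast<Application*>(GetWindowLongPtr(hwnd, GWLP_USERDATA));
            }
            if (self)
                return self->handleMessage(hwnd, msg, wParam, lParam);  // instance-side handler
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }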

    Read the article

  • Using Windows PowerShell 1.0 or 2.0 to evaluate performance of executable files

    - by Andry
    Hello! I am writing a simple script in Windows PowerShell in order to evaluate the performance of executable files. The important hypothesis is the following: I have an executable file; it can be an application written in any language (.NET or not - Visual Prolog, C++, C, anything that can be compiled as an .exe file). I want to profile it by getting execution times. I did this:

        Function Time-It {
            Param ([string]$ProgramPath, [string]$Arguments)
            $Watch = New-Object System.Diagnostics.Stopwatch
            $NsecPerTick = (1000 * 1000 * 1000) / [System.Diagnostics.Stopwatch]::Frequency
            Write-Output "Stopwatch created! NSecPerTick = $NsecPerTick"
            $Watch.Start() # Starts the timer
            [System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
            $Watch.Stop() # Stops the timer
            # Collecting timings
            $Ticks = $Watch.ElapsedTicks
            $NSecs = $Watch.ElapsedTicks * $NsecPerTick
            Write-Output "Program executed: time is: $Nsecs ns ($Ticks ticks)"
        }

    This function uses a stopwatch. The function accepts a program path, the stopwatch is started, the program is run, and the stopwatch is then stopped. Problem: System.Diagnostics.Process.Start is asynchronous, and the next instruction (stopping the watch) is not executed when the application finishes; it runs as soon as the new process is created. I need to stop the timer once the program ends. I thought about the Process class, thinking it held some info regarding the execution times... no luck... How do I solve this?
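
    One way to make the timing bracket the whole run, sketched under the assumption that blocking until the child exits is acceptable: Process.Start returns a Process object whose WaitForExit() blocks, and the same object exposes the kernel-measured CPU time afterwards.

        Function Time-It {
            Param ([string]$ProgramPath, [string]$Arguments)
            $Watch = [System.Diagnostics.Stopwatch]::StartNew()
            $Proc = [System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
            $Proc.WaitForExit()              # block until the program ends
            $Watch.Stop()
            Write-Output ("Elapsed: {0} ms (wall clock)" -f $Watch.ElapsedMilliseconds)
            Write-Output ("CPU time: {0} ms" -f $Proc.TotalProcessorTime.TotalMilliseconds)
        }

        # Usage (paths are placeholders):
        # Time-It -ProgramPath "C:\tools\myapp.exe" -Arguments "input.txt"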

    Read the article

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to some tips on structuring the cluster so that it is possible to submit jobs from remote machines. Currently I have 5 machines: 4 are basically the Hadoop cluster with the NameNodes, YARN, etc., and one machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking on the setup, and if anyone can chime in on the points I am not clear about, that would be great. I was thinking about the best setup for a small cluster, so I decided to expose only the one manager machine and probably use it to submit all the jobs. The other machines will see each other, etc., but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it; if anyone could point me in the right direction that would be great. Another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on that machine in order to use the normal hadoop commands and to write/submit jobs, say from Eclipse or something similar? So to sum it up, my questions are:

    1. Is this an OK setup for a small test cluster?
    2. How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
    3. How do I set up a client machine to submit jobs to a remote cluster, and is there an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup?

    Thanks, I would greatly appreciate any advice or help on this.
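
    As a rough illustration of the "thin client / gateway" idea (hostnames and ports are placeholders, and exact property names depend on the Hadoop 2/YARN distribution in use): a client machine typically only needs the Hadoop client libraries plus configuration files that point at the exposed machine, after which jobs are submitted with the normal hadoop jar command.

        <!-- core-site.xml on the client -->
        <configuration>
          <property>
            <name>fs.defaultFS</name>
            <value>hdfs://gateway.example.com:8020</value>   <!-- placeholder host -->
          </property>
        </configuration>

        <!-- yarn-site.xml on the client -->
        <configuration>
          <property>
            <name>yarn.resourcemanager.address</name>
            <value>gateway.example.com:8032</value>          <!-- placeholder host -->
          </property>
        </configuration>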

    Read the article

  • How to disable MSBuild's <RegisterOutput> target on a per-user basis?

    - by Roger Lipscombe
    I like to do my development as a normal (non-Admin) user. Our VS2010 project build fails with "Failed to register output. Please try enabling Per-user Redirection or register the component from a command prompt with elevated permissions." Since I'm not at liberty to change the project file, is there any way that I can add user-specific MSBuild targets or properties that disable this step on a specific machine, or for a specific user? I'd prefer not to hack on the core MSBuild files. I don't want to change the project file because I might then accidentally check it back in. Nor do I want to hack on the MSBuild core files, because they might get overwritten by a service pack. Given that the Visual C++ project files (and associated .targets and .props files) have about a million places to alter the build order and to import arbitrary files, I was hoping for something along those lines. MSBuild imports/evaluates the project file as follows (I've only looked down the branches that interest me):

        Foo.vcxproj
          Microsoft.Cpp.Default.props
          Microsoft.Cpp.props
            $(UserRootDir)\Microsoft.Cpp.$(Platform).user.props
          Microsoft.Cpp.targets
            Microsoft.Cpp.$(Platform).targets
              ImportBefore\*
              Microsoft.CppCommon.targets

    The "RegisterOutput" target is defined in Microsoft.CppCommon.targets. I was hoping to replace this by putting a do-nothing "RegisterOutput" target in $(UserRootDir)\Microsoft.Cpp.$(Platform).user.props, which is %LOCALAPPDATA%\MSBuild\v4.0\Microsoft.Cpp.Win32.user.props (UserRootDir is set in Microsoft.Cpp.Default.props if it's not already set). Unfortunately, MSBuild uses the last-defined target, which means that mine gets overridden by the built-in one. Alternatively, I could attempt to set the %(Link.RegisterOutput) metadata, but I'd have to do that on all Link items. Any idea how to do that, or even if it'll work?
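
    One avenue worth sketching, untested here and assuming (as the question suggests) that the step is driven by the Link item's RegisterOutput metadata: an ItemDefinitionGroup in the per-user props file applies a default to all Link items without touching the project file.

        <!-- %LOCALAPPDATA%\MSBuild\v4.0\Microsoft.Cpp.Win32.user.props -->
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <ItemDefinitionGroup>
            <Link>
              <RegisterOutput>false</RegisterOutput>   <!-- turn the step off for every Link item -->
            </Link>
          </ItemDefinitionGroup>
        </Project>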

    Read the article

  • How can I use SQL to select duplicate records, along with counts of related items?

    - by mipadi
    I know the title of this question is a bit confusing, so bear with me. :) I have a (MySQL) database with a Person record. A Person also has a slug field. Unfortunately, slug fields are not unique. There are a number of duplicate records, i.e., the records have different IDs but the same first name, last name, and slug. A Person may also have 0 or more associated articles, blog entries, and podcast episodes. If that's confusing, here's a diagram of the structure. I would like to produce a list of records that match these criteria: duplicate records (i.e., same slug field) for people who also have at least 1 article, blog entry, or podcast episode. I have a SQL query that will list all records with the same slug fields:

        SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records
        FROM people_person
        GROUP BY slug
        HAVING (COUNT(slug) > 1)
        ORDER BY last_name, first_name, id;

    But this includes records for people that may not have at least 1 article, blog entry, or podcast. Can I tweak this to fit the second criterion?
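
    A sketch of one way to add the "has at least one related item" condition with EXISTS subqueries (the related table and column names here are guesses, since the schema is only shown as a diagram):

        SELECT p.id, p.first_name, p.last_name, p.slug, COUNT(p.slug) AS person_records
        FROM people_person p
        WHERE EXISTS (SELECT 1 FROM articles a          WHERE a.person_id = p.id)
           OR EXISTS (SELECT 1 FROM blog_entries b      WHERE b.person_id = p.id)
           OR EXISTS (SELECT 1 FROM podcast_episodes e  WHERE e.person_id = p.id)
        GROUP BY p.slug
        HAVING COUNT(p.slug) > 1
        ORDER BY p.last_name, p.first_name, p.id;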

    Read the article

  • Appengine JDO dataclasses to python model

    - by M.A. Cape
    Has anyone tried to implement an app in GAE having both Java and Python? I have an existing app and my front end is in Java. Now I want to access the existing datastore from Python. My problem is that I don't know how to define the relationships and model so that they are equivalent to the ones in Java. I have tried the one-to-many relationship in Python, but when stored in the datastore, the fields are different from those produced by the Java one-to-many. My data classes are as follows.

        //one-to-many owned

        Parent class:

        public class Parent{
            @PrimaryKey
            @Persistent
            private String unitID;

            //some other fields...

            @Persistent
            @Order(extensions = @Extension(vendorName="datanucleus", key="list-ordering", value="dateCreated desc"))
            private List <Child> child;

            //methods & constructors were omitted
        }

        Child class:

        public class Child{
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key uId;

            @Persistent
            private String name;

            /* etc... */
        }

    Read the article

  • New desktop GUI developer; can choose any platform...

    - by alexantd
    I'm planning a client-server product for a tiny, low-volume, high-cost vertical market. One of the components of the product will be a desktop application, simple to moderate in complexity, for data entry and uploading to a central server from remote PCs and/or Macs via SOAP. The server is a Java web app. Customers will be choosing their platform (Windows or Mac) based on what the client app runs on, so my options are wide open here. However, I will be developing on a Mac and have a strong allergy to MS-specific technologies (sorry). The app will not need to run on any non-desktop-computer devices, and I have total freedom to say it will support X but not Y or Z without any negative consequences (quite the luxury, to be sure). I have a lot of experience in server-side development but very little in desktop GUI stuff, and am evaluating my options on the client - basically, what do I want to commit to learning over the next 6+ months. I have server-side Java experience as well as a brief dabble in iPhone development, which went OK. Overall I'm looking for:

    - Ease of learning & development
    - IDE support
    - Healthy surrounding ecosystem (libraries, tools, help, etc.)
    - Quality documentation

    My options as I see them, in rough order of how I'm currently mentally ranking them:

    1. Java Swing
    2. Cocoa
    3. Java SWT
    4. JavaFX
    5. Adobe AIR
    6. XULRunner

    Am I leaving anything out?

    Read the article

  • Update table without using cursor and on date

    - by Muhammad Kashif Nadeem
    Please copy and run the following script:

        DECLARE @Customers TABLE (CustomerId INT)
        DECLARE @Orders TABLE (OrderId INT, CustomerId INT, OrderDate DATETIME)
        DECLARE @Calls TABLE (CallId INT, CallTime DATETIME, CallToId INT, OrderId INT)
        -----------------------------------------------------------------
        INSERT INTO @Customers SELECT 1
        INSERT INTO @Customers SELECT 2
        -----------------------------------------------------------------
        INSERT INTO @Orders SELECT 10, 1, DATEADD(d, -20, GETDATE())
        INSERT INTO @Orders SELECT 11, 1, DATEADD(d, -10, GETDATE())
        -----------------------------------------------------------------
        INSERT INTO @Calls SELECT 101, DATEADD(d, -19, GETDATE()), 1, NULL
        INSERT INTO @Calls SELECT 102, DATEADD(d, -17, GETDATE()), 1, NULL
        INSERT INTO @Calls SELECT 103, DATEADD(d, -9, GETDATE()), 1, NULL
        INSERT INTO @Calls SELECT 104, DATEADD(d, -6, GETDATE()), 1, NULL
        INSERT INTO @Calls SELECT 105, DATEADD(d, -5, GETDATE()), 1, NULL
        -----------------------------------------------------------------

    I want to update the @Calls table and need the following results. I am using this query:

        UPDATE @Calls
        SET OrderId = (CASE WHEN (s.CallTime > e.OrderDate) THEN e.OrderId END)
        FROM @Calls s
        INNER JOIN @Orders e ON s.CallToId = e.CustomerId

    and the result of my query is not what I need. Requirement: as you can see, there are two orders, one on 2010-12-12 and one on 2010-12-22. I want to update the @Calls table with the relevant OrderId with respect to CallTime. In short: if subsequent Orders are added, and there are further calls, then we assume that a new call is associated with the most recent Order. Note: this is sample data, so it is not the case that I always have two Orders. There might be 10+ Orders and 100+ calls. Note 2: I could not find a good title for this question. Please change it if you can think of a better one. Thanks.
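
    A sketch of a set-based way to express "the most recent Order on or before each call", using OUTER APPLY with a correlated TOP 1 (this assumes "most recent Order" means the latest OrderDate that is not after the CallTime):

        UPDATE c
        SET OrderId = o.OrderId
        FROM @Calls c
        OUTER APPLY (
            SELECT TOP 1 OrderId
            FROM @Orders
            WHERE CustomerId = c.CallToId
              AND OrderDate <= c.CallTime        -- only orders placed before the call
            ORDER BY OrderDate DESC              -- ...and pick the most recent of them
        ) o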

    Read the article

  • Append json data to html class name

    - by user2898514
    I have a problem with my JSON code. I want each JSON value to be appended to the HTML element whose class name matches that value's key in the JSON data. Here's my live demo. If you look at the result in the live demo, it's only appending the last record. Is it possible to make it show all records in order?

    JSON:

        var json = '[{"castle":"big","commercial":"large","common":"sergio","cultural":"2009"},' +
                   '{"castle":"big2","commercial":"large2","common":"sergio2","cultural":"20092"}]';

    HTML:

        <div class="castle"></div>
        <div class="commercial"></div>
        <div class="common"></div>
        <div class="cultural"></div>

    JavaScript:

        var data = $.parseJSON(json);
        $.each(data, function(l,v) {
            $.each(v, function(k,o) {
                $('.'+k).attr('id', k+o);
                console.log($('#'+k+o).attr('id'));
                $('#'+k+o).text(o);
            });
        });

    For more illustration: I want the result in the live demo to look like this: big large sergio 2009, big2 large2 sergio2 20092
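
    A hedged guess at the cause, with a small sketch of a fix: .text() replaces the element's content on every pass (and re-assigning one id per class only keeps the last assignment), so only the final record survives. Appending instead keeps every record, in order:

        var data = $.parseJSON(json);
        $.each(data, function (i, record) {
            $.each(record, function (key, value) {
                // add each value to the element with the matching class, rather than overwriting it
                $('.' + key).append(value + ' ');
            });
        });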

    Read the article

  • How to return ArrayList results from an IntentService

    - by gcl1
    I have an IntentService that loads up an ArrayList with data from a network source (AWS SDB tables). The ArrayList is in a global space - accessible to both the calling Activity and the IntentService (like this: appState = ((App)getApplicationContext())). When the IntentService is done, it notifies the Activity through a ResultReceiver, and the Activity calls adapter.notifyDataChanged() to update the ListView. This solution works most of the time, ... but it violates the rule that only the UI thread should make changes to data underlying a ListView. So as it is, I sometimes get an error: "The content of the adapter has changed but ListView did not receive a notification." I think this must be a common situation. Please let me know if you have any suggestions or best practices for this problem. Here are three options I'm aware of:

    1. Keep the IntentService, and have it store the results in another "working" ArrayList, also in the global space. When the result is ready, the IntentService calls the ResultReceiver (on the UI thread), which can then: a) copy the result to the ArrayList associated with the ListView, and b) call adapter.notifyDataChanged(). CONS: I don't like the idea of putting temp/working data in a global space, and copying the result list seems inefficient.
    2. Keep the IntentService, and have it pass the results back through a bundle loaded with a ParcelableArrayList (a sketch of what this can look like follows below). CONS: I'm not sure if this approach would scale for very large result sets. It also requires copying the result list.
    3. Switch to a Service which builds a local copy of the result list. Have the Activity directly access the address space of the Service in order to read the result list. CON: Still requires copying results to the ArrayList associated with the ListView.

    Thank you.
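
    For reference, a minimal sketch of option 2's bundle hand-off, assuming the result items implement Parcelable (the Item class and the "results" key are made up for illustration):

        // Inside the IntentService, once the ArrayList<Item> results are ready:
        Bundle bundle = new Bundle();
        bundle.putParcelableArrayList("results", results);       // key is an arbitrary string
        resultReceiver.send(Activity.RESULT_OK, bundle);

        // In the Activity's ResultReceiver.onReceiveResult(), which runs on the UI thread:
        ArrayList<Item> items = resultData.getParcelableArrayList("results");
        adapterData.clear();
        adapterData.addAll(items);
        adapter.notifyDataSetChanged();                          // safe: we're on the UI thread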

    Read the article

  • SQL won't work? It doesn't come up with errors either

    - by Stefan
    Hey there, I have a PHP function which checks to see if variables are set and then adds them onto my SQL query. However, I don't seem to be getting any results back!?

        $where_array = array();
        if (array_key_exists("location", $_GET)) {
            $location = addslashes($_GET['location']);
            $where_array[] = "`mainID` = '".$location."'";
        }
        if (array_key_exists("gender", $_GET)) {
            $gender = addslashes($_GET["gender"]);
            $where_array[] = "`gender` = '".$gender."'";
        }
        if (array_key_exists("hair", $_GET)) {
            $hair = addslashes($_GET["hair"]);
            $where_array[] = "`hair` = '".$hair."'";
        }
        if (array_key_exists("area", $_GET)) {
            $area = addslashes($_GET["area"]);
            $where_array[] = "`locationID` = '".$area."'";
        }
        $where_expr = '';
        if ($where_array) {
            $where_expr = "WHERE " . implode(" AND ", $where_array);
        }
        $sql = "SELECT `postID` FROM `posts` ". $where_expr;
        $dbi = new db();
        $result = $dbi->query($sql);
        $r = mysql_fetch_row($result);

    I'm trying to call the data afterwards in a list, like so:

        $dbi = new db();
        $offset = ($currentpage - 1) * $rowsperpage;
        // get the info from the db
        $sql .= " ORDER BY `time` DESC LIMIT $offset, $rowsperpage";
        $result = $dbi->query($sql);
        // while there are rows to be fetched...
        while ($row = mysql_fetch_object($result)){
            // echo data
            echo $row['text'];
        } // end while

    Anyone got any ideas why I am not retrieving any data? -Stefan
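
    Two details that may explain the empty output, offered as guesses with a small sketch: mysql_fetch_object() returns an object, so $row['text'] is invalid (it would be $row->text), and the query only selects `postID`, so there is no `text` column in the result to begin with (the `text` and `time` column names below are taken from the posted code and may need adjusting):

        $sql = "SELECT `postID`, `text`, `time` FROM `posts` " . $where_expr;
        $sql .= " ORDER BY `time` DESC LIMIT $offset, $rowsperpage";
        $result = $dbi->query($sql);
        while ($row = mysql_fetch_object($result)) {
            echo $row->text;          // object property access, not array access
        }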

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kind of pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate a log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL'
        WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work. My question (TL;DR): how can I shrink my transaction log file without disabling the mirror?
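
    For context, a sketch of the usual approach, offered as general SQL Server practice rather than a guaranteed fix for this setup: mirroring requires the FULL recovery model, so log space can only be reused after genuine transaction log backups taken on a schedule to a real destination; only after such a backup does SHRINKFILE have free space to release.

        -- take a genuine log backup to a real file (repeat on a schedule so the log can be reused)
        BACKUP LOG dbname
            TO DISK = N'D:\Backups\dbname_log.trn'   -- placeholder path
            WITH INIT, NAME = N'dbname log backup';
        GO
        -- afterwards, shrink the log file toward the target size (in MB)
        USE dbname;
        GO
        DBCC SHRINKFILE (N'dbname_Log', 2048);
        GO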

    Read the article

  • Architecture for database analytics

    - by David Cournapeau
    Hi, we have an architecture where we provide each customer Business Intelligence-like services for their website (internet merchant). Now, I need to analyze those data internally (for algorithmic improvement, performance tracking, etc...) and those are potentially quite heavy: we have up to millions of rows / customer / day, and I may want to know how many queries we had in the last month, compared week by week, etc... that is on the order of billions of entries, if not more. The way it is currently done is quite standard: daily scripts which scan the databases and generate big CSV files. I don't like this solution for several reasons:

    - as is typical with those kinds of scripts, they fall into the write-once and never-touched-again category
    - tracking things in "real-time" is necessary (we have a separate toolset to query the last few hours at the moment)
    - this is slow and non-"agile"

    Although I have some experience in dealing with huge datasets for scientific usage, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of issue.

    Read the article
