Search Results

Search found 17924 results on 717 pages for 'z order'.

Page 608/717

  • formatting mysql data for output into a table

    - by bsandrabr
    Following on from a question earlier today, this answer was given to read the data into an array and group it, so that the vehicle type is printed followed by some data for each vehicle:

        <?php
        $sql = "SELECT * FROM apparatus ORDER BY vehicleType";
        $getSQL = mysql_query($sql);
        // transform the result set:
        $data = array();
        while ($row = mysql_fetch_assoc($getSQL)) {
            $data[$row['vehicleType']][] = $row;
        }
        ?>
        <?php foreach ($data as $type => $rows): ?>
            <h2><?php echo $type ?></h2>
            <ul>
            <?php foreach ($rows as $vehicleData): ?>
                <li><?php echo $vehicleData['name']; ?></li>
            <?php endforeach ?>
            </ul>
        <?php endforeach ?>

    This is almost perfect for what I want to do, but I need to print out two columns from the database (e.g. "Ford" and "Mondeo") before going into the second foreach loop. I've tried print $rows['model'] and all the other combinations I can think of, but that doesn't work. Any help much appreciated.
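
    One possible approach (a sketch based on the code above, not from the original answer; it assumes the apparatus table has the extra columns, e.g. 'model'): since each $rows entry is a full row, the first row of a group exposes those columns before the inner loop runs.

        <?php
        // Sketch: the first row of each vehicleType group ($rows[0]) carries
        // every column of that row, so extra columns such as 'model' can be
        // printed before looping over the individual vehicles.
        foreach ($data as $type => $rows) {
            echo '<h2>' . $type . ' - ' . $rows[0]['model'] . '</h2>';
            echo '<ul>';
            foreach ($rows as $vehicleData) {
                echo '<li>' . $vehicleData['name'] . '</li>';
            }
            echo '</ul>';
        }
        ?>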

    Read the article

  • How to avoid a notice in PHP when one of the conditions is not true

    - by user225269
    I've noticed that when one of the two conditions in a PHP if statement is not true, you get an undefined index notice for the one that fails, and in my case the result is a distorted web page. For example, this code:

        <?php
        session_start();
        if (!isset($_SESSION['loginAdmin']) && ($_SESSION['loginAdmin'] != '')) {
            header("Location: loginam.php");
        } else {
            include('head2.php');
        }
        if (!isset($_SESSION['login']) && ($_SESSION['login'] != '')) {
            header("Location: login.php");
        } else {
            include('head3.php');
        }
        ?>

    If one of the if statements is not true, the one that fails gives a notice that the index is undefined. In my case it says that the session key 'login' is not defined when 'loginAdmin' is the one being used. What can you recommend I do in order to avoid these undefined index notices?
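
    A common fix (a sketch adapted from the code above, assuming the intent is "redirect when the key is missing or empty"; the exit after header() is an addition): drop the negation pattern so isset() short-circuits the comparison and PHP never reads the missing key.

        <?php
        // Sketch: if the session key is absent, the || short-circuits and the
        // comparison is never evaluated, so no undefined index notice is raised.
        session_start();
        if (!isset($_SESSION['loginAdmin']) || $_SESSION['loginAdmin'] == '') {
            header("Location: loginam.php");
            exit; // stop the script once the redirect has been sent
        } else {
            include('head2.php');
        }
        if (!isset($_SESSION['login']) || $_SESSION['login'] == '') {
            header("Location: login.php");
            exit;
        } else {
            include('head3.php');
        }
        ?>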

    Read the article

  • How can I distribute x cakes amongst y people?

    - by Rupert
    I have an associative array with the ids of x cakes and another associative array with the ids of y people. I want to ensure that each cake is enjoyed by exactly 2 people and that each person gets a fair share of cake overall. However, cakes must be kept whole (i.e. if the average cake per person is a fraction, this fraction will be rounded up for some and down for others). No person can be assigned the same cake twice. For example:

        $cake = array('id'=>'1', 'id'=>'2');
        $people = array('id'=>'1', 'id'=>'2', 'id'=>'3');

    In order to do so, I wish to create a new array where each row represents a cake-person assignment. As each cake is being assigned to two people, the number of rows in this table should be exactly twice the number of cakes. There will not be exactly one solution to this problem, but a solution to the above example would be:

        $cake_person = array(
            '1' => array('cake_id' => '1', 'person_id' => '1'),
            '2' => array('cake_id' => '1', 'person_id' => '2'),
            '3' => array('cake_id' => '2', 'person_id' => '2'),
            '4' => array('cake_id' => '2', 'person_id' => '3'),
        );

    Notice that people 1 and 3 are losing out, but that is because there is no more cake to go around! Each cake must be given exactly twice. How can I generate such a solution reliably for larger numbers of people and cakes?
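
    One way to do it (a sketch, not from the original post; it assumes the ids are available as plain arrays and that there are at least two people): hand each cake to two consecutive people taken from a rotating pointer, so every cake gets two distinct people and the shares stay as even as whole cakes allow.

        <?php
        // Sketch: round-robin assignment of each cake to the next two people.
        function assignCakes(array $cakes, array $people) {
            $assignments = array();
            $i = 0;
            $n = count($people);
            foreach ($cakes as $cakeId) {
                $first  = $people[$i % $n];
                $second = $people[($i + 1) % $n]; // always a different person
                $assignments[] = array('cake_id' => $cakeId, 'person_id' => $first);
                $assignments[] = array('cake_id' => $cakeId, 'person_id' => $second);
                $i += 2;
            }
            return $assignments;
        }

        print_r(assignCakes(array('1', '2'), array('1', '2', '3')));
        ?>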

    Read the article

  • How would I compare two Lists(Of <CustomClass>) in VB?

    - by Kumba
    I'm working on implementing the equality operator = for a custom class of mine. The class has one property, Value, which is itself a List(Of OtherClass), where OtherClass is yet another custom class in my project. I've already implemented the IComparer, IComparable, IEqualityComparer, and IEquatable interfaces, the operators =, <>, bool and not, and overridden Equals and GetHashCode for OtherClass. This should give me all the tools I need to compare these objects, and the various tests comparing two single instances of these objects so far check out. However, I'm not sure how to approach this when they are in a List. I don't care about the list order. Given:

        Dim x As New List(Of OtherClass) From {New OtherClass("foo"), New OtherClass("bar"), New OtherClass("baz")}
        Dim y As New List(Of OtherClass) From {New OtherClass("baz"), New OtherClass("foo"), New OtherClass("bar")}

    Then (x = y).ToString should print out True. I need to compare the same (not distinct) set of objects in this list. The list shouldn't support dupes of OtherClass, but I'll have to figure out how to add that in later as an exception. Not interested in using LINQ. It looks nice, but in the few examples I've played with, it adds a performance overhead that bugs me. Loops are ugly, but they are fast :) A straight code answer is fine, but I'd like to understand the logic needed for such a comparison as well. I'm probably going to have to implement said logic more than a few times down the road.

    Read the article

  • Iteratively creating multiple file input fields in Rails

    - by David
    I have a column of product views in a database (e.g. top, bottom, front, back). I'm trying to generate a series of file inputs to allow the user to upload an image for each view. This is the result I'm after:

        ...
        <label>Top</label>
        <input type="file" name="image[Top]"><br>
        <label>Bottom</label>
        <input type="file" name="image[Bottom]"><br>
        <label>Front</label>
        <input type="file" name="image[Front]"><br>
        ...

    This is what I'm trying:

        <%= views = View.order('name ASC').all.map { |view| [view.name, view.id] } %>
        <%= views.each { |view| label(view); file_field('image', view) } %>

    However, all this does is print out the views array a couple of times. Hopefully you Rails experts can point me in the right direction. (I apologize in advance if I'm butchering Ruby.)

    Read the article

  • Most efficient way to LIMIT results in a JOIN?

    - by johnnietheblack
    I have a fairly simple one-to-many type join in a MySQL query. In this case, I'd like to LIMIT my results by the left table. For example, let's say I have an accounts table and a comments table, and I'd like to pull 100 rows from accounts and all the associated comments rows for each. The only way I can think to do this is with a sub-select in the FROM clause instead of simply selecting FROM accounts. Here is my current idea:

        SELECT a.*, c.*
        FROM (SELECT * FROM accounts LIMIT 100) a
        LEFT JOIN `comments` c ON c.account_id = a.id
        ORDER BY a.id

    However, whenever I need to do a sub-select of some sort, my intermediate-level SQL knowledge feels like it's doing something wrong. Is there a more efficient, or faster, way to do this, or is this pretty good? By the way... this might be the absolute simplest way to do this, which I'm okay with as an answer. I'm simply trying to figure out if there IS another way to do this that could potentially compete with the above statement in terms of speed.
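
    For comparison, a sketch of the usual two-query alternative (an assumption about the schema based on the question, using the legacy mysql_* API of the era): fetch the 100 account ids first, then join only against those ids. An explicit ORDER BY also keeps the LIMIT deterministic, which the derived table above leaves to chance.

        <?php
        // Sketch: grab the ids of the 100 accounts, then pull their comments.
        $ids = array();
        $res = mysql_query("SELECT id FROM accounts ORDER BY id LIMIT 100");
        while ($row = mysql_fetch_assoc($res)) {
            $ids[] = (int) $row['id'];
        }

        if (count($ids) > 0) {
            $idList = implode(',', $ids);
            $res = mysql_query(
                "SELECT a.*, c.*
                   FROM accounts a
                   LEFT JOIN comments c ON c.account_id = a.id
                  WHERE a.id IN ($idList)
                  ORDER BY a.id"
            );
        }
        ?>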

    Read the article

  • Downloading From Google Docs

    - by jeremynealbrown
    Hello, I am using the gdata-java-client Version 2 for an Android app that allows users to download documents from their Google Docs account. Currently I am able to authenticate, request and display a list of all the user's documents. From here I would like to open each type of document in a specific Activity: if it's a spreadsheet or a csv file, open it in one activity, and if it is a text document open it in another activity. This is where things are getting hazy. First I need to determine what type of document the user selected in order to download the file in the appropriate format by appending exportFormat=(csv, xls, doc, txt) to the query string. I don't see any indication in the original list of documents as to what kind of file each entry is. Secondly, as a test I can just append a raw string to the end of the query string. As an example, a query might look like this:

        https://spreadsheets.google.com/feeds/download/spreadsheets/Export?key=0AsE_6_YIr797dHBTUWlHMUFXeTV4ZzJlUGxWRnJXanc&exportFormat=xls

    Notice that at the end of the string is the hardcoded export format. This query returns an HTTPResponse with a 200 OK message. However, if I look at response.content or use response.parseAsString, I see what appears to be a Google Docs home page as html text. I don't get this result when I try to download a text document; when I request a text document, response.content is the body of the text file. If I copy and paste this uri into a browser I get the requested file as a download. To summarize, this question is two-fold:

        1. How do I determine the type (plain text, .doc, .csv, .xls) of a document from the initial list of user documents?
        2. How do I download the actual .csv or spreadsheet files?

    Thanks in advance.

    Read the article

  • Process a set of files from a source directory to a destination directory in Python

    - by Spoike
    Being completely new to Python, I'm trying to run a command over a set of files. The command requires both a source and a destination file (I'm actually using imagemagick convert as in the example below). I can supply both source and destination directories, however I can't figure out how to easily retain the directory structure from the source to the destination directory. E.g. say srcdir contains the following:

        srcdir/
            file1
            file3
            dir1/
                file1
                file2

    Then I want the program to create the following destination files in destdir: destdir/file1, destdir/file3, destdir/dir1/file1 and destdir/dir1/file2. So far this is what I came up with:

        import os
        from subprocess import call

        srcdir = os.curdir  # just use the current directory
        destdir = 'path/to/destination'
        for root, dirs, files in os.walk(srcdir):
            for filename in files:
                sourceFile = os.path.join(root, filename)
                destFile = '???'
                cmd = "convert %s -resize 50%% %s" % (sourceFile, destFile)
                call(cmd, shell=True)

    The walk method doesn't directly provide which directory the file is under within srcdir, other than by concatenating the root directory string with the file name. Is there some easy way to get the destination file, or do I have to do some string manipulation in order to do this?

    Read the article

  • Matlab GUI: How to Save the Results of Functions (states of application)

    - by niko
    Hi, I would like to create an animation which enables the user to go backward and forward through the steps of a simulation. The animation has to simulate the iterative process of channel decoding (a receiver receives a block of bits, performs an operation, and then checks if the block corresponds to the parity rules; if the block doesn't correspond, the operation is performed again, and the process finally ends when the code corresponds to the given rules). I have written the functions which perform the decoding process and return an m x n x i matrix, where m x n is the block of data and i is the iteration index. So if it takes 3 iterations to decode the data, the function returns an m x n x 3 matrix with each step stored. In the GUI (.fig file) I put a "decode" button which runs the method for decoding, and there are "back" and "forward" buttons which should enable the user to switch between the data of the recorded steps. I have stored the decodedData matrix and the currentStep value as global variables, so by clicking the "back" and "forward" buttons the indices should change and point to the appropriate step states. When I tried to debug the application, the method returned the decoded data, but when I clicked "back" and "next" the decoded data appeared not to be declared. Does anyone know how it is possible to access (or store) the results of the functions in order to enable the described logic which I want to implement in a Matlab GUI?

    Read the article

  • mysql data read returning 0 rows from class

    - by Neo
    I am implementing a database manager class within my app, mainly because there are 3 databases to connect to, one being a local one. However the returned result isn't working: I know the query brings back rows, but when it is returned by the class it has 0. What am I missing?

        public MySqlDataReader localfetchrows(string query, List<MySqlParameter> dbparams = null)
        {
            using (var conn = connectLocal())
            {
                Console.WriteLine("Connecting local : " + conn.ServerVersion);
                MySqlCommand sql = conn.CreateCommand();
                sql.CommandText = query;
                if (dbparams != null)
                {
                    if (dbparams.Count > 0)
                    {
                        sql.Parameters.AddRange(dbparams.ToArray());
                    }
                }
                MySqlDataReader reader = sql.ExecuteReader();
                Console.WriteLine("Reading data : " + reader.HasRows + reader.FieldCount);
                return reader;
                /*
                using (MySqlCommand sql = conn.CreateCommand())
                {
                    sql.CommandText = query;
                    if (dbparams != null)
                    {
                        if (dbparams.Count > 0)
                        {
                            sql.Parameters.AddRange(dbparams.ToArray());
                        }
                    }
                    MySqlDataReader reader = sql.ExecuteReader();
                    Console.WriteLine("Reading data : " + reader.HasRows + reader.FieldCount);
                    sql.Parameters.Clear();
                    return reader;
                }
                */
            }
        }

    And the code to get the results:

        query = @"SELECT jobtypeid, title FROM jobtypes WHERE active = 'Y' ORDER BY title ASC";
        //parentfrm.jobtypes = db.localfetchrows(query);
        var rows = db.localfetchrows(query);
        Console.WriteLine("Reading data : " + rows.HasRows + rows.FieldCount);
        while (rows.Read()){ }

    These scripts return the following:

        Connecting local : 5.5.16
        Reading data : True2
        Reading data : False0

    Read the article

  • How to handle 30k files in a project which requires them?

    - by Jeremiah
    Visual Studio 2010 RC - Silverlight Application. We have a library of images that we need to have access to. They are given to us by a vendor (through an installer) and they are not in a database; they are files in a folder (a very large monster of a folder). We do not control when the images change, so the vendor needs to be able to override them individually. We get updates frequently enough from this vendor to state that these images change "randomly" and without our (programmer) knowledge.

    The problem: I don't want 30K images in SVN. Heck, I don't even want to imagine them in my Solution. However, our application requires them in order to run properly. So, our build/staging servers need access to these images (we have two build servers).

    The question: How would you handle it when your application will not work as specified without access to each of 30k images and you don't control when those images change? I do not want a crazy large SVN repository. Because I don't know when any of these images change, I really don't want them in my solution (and definitely don't want a large solution, either). I also don't want a bunch of manual steps to do every time these images change. Our mantra, up to this point, has always been: any developer can download from SVN, compile and run our app. These images are going to kill that mantra. I'm tempted to make a WCF service that will return images if they exist and a dummy image if they don't. This way all dev boxes will return a dummy image, and our build/staging/production boxes will return real images (the ones that actually have the vendor's image installer installed). This has to be a solved problem. What have other people done to handle these types of problems? I'm open to suggestions.

    Read the article

  • Query Parameter Value Is Null When Enum Item 0 is Cast to Int32

    - by Timothy
    When I use the first item in a zero-based enum, cast to Int32, as a query parameter, the parameter value is null. I've worked around it by simply setting the first item to a value of 1, but I was wondering what's really going on here. This one has me scratching my head. Why is the parameter value regarded as null instead of 0?

        enum LogEventType : int
        {
            SignIn,
            SignInFailure,
            SignOut,
            ...
        }

        private static DataTable QueryEventLogSession(DateTime start, DateTime stop)
        {
            DataTable entries = new DataTable();
            using (FbConnection conn = new FbConnection(DSN))
            {
                using (FbDataAdapter adapter = new FbDataAdapter(
                    "SELECT event_type, event_timestamp, event_details FROM event_log " +
                    "WHERE event_timestamp BETWEEN @start AND @stop " +
                    "AND event_type IN (@signIn, @signInFailure, @signOut) " +
                    "ORDER BY event_timestamp ASC", conn))
                {
                    adapter.SelectCommand.Parameters.AddRange(new Object[] {
                        new FbParameter("@start", start),
                        new FbParameter("@stop", stop),
                        new FbParameter("@signIn", (Int32)LogEventType.SignIn),
                        new FbParameter("@signInFailure", (Int32)LogEventType.SignInFailure),
                        new FbParameter("@signOut", (Int32)LogEventType.SignOut)});
                    Trace.WriteLine(adapter.SelectCommand.CommandText);
                    foreach (FbParameter p in adapter.SelectCommand.Parameters)
                    {
                        Trace.WriteLine(p.Value.ToString());
                    }
                    adapter.Fill(entries);
                }
            }
            return entries;
        }

    Read the article

  • Does anybody have any tips for managing polymorphic nested resources in Rails 3?

    - by Ryan
    In config/routes.rb:

        resources :posts do
          resources :comments
        end
        resources :pictures do
          resources :comments
        end

    I would like to allow for more things to be commented on as well. I'm currently using mongoid (mongomapper isn't as compatible with Rails 3 yet as I would like), and comments are an embedded resource (mongoid can't yet handle polymorphic relational resources), which means that I do need the parent resource in order to find the comment. Are there any elegant ways to handle some of the following problems? In my controller, I need to find the parent before finding the comment:

        if params[:post_id]
          parent = Post.find(params[:post_id])
        elsif params[:picture_id]
          parent = Picture.find(params[:picture_id])
        end

    This is going to get messy if I start adding more things to be commentable. Also, url_for([comment.parent, comment]) doesn't work, so I'm going to have to define something in my Comment model, but I think I'm also going to need to define an index route in the Comment model, as well as potentially edit and new route definitions. There might be more issues that I have to deal with as I get further. I can't imagine I'm the first person to try and solve this problem; are there any solutions out there to make this more manageable?

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: It seems Access is unable to perform a query this complex, as it crashes any time I run the query.

    ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new:

        1. Export each table into CSVs and import them into SQLite, then write a query there to do the same thing that Access fails to do (merge 64 tables).
        2. Export each table into CSVs and write a script to read each one and merge the CSVs into a single CSV.
        3. Somehow connect to the MS Access DB (API), and write a script to pull data from each table and merge them into a CSV file.

    QUESTION: What do you recommend?

    Read the article

  • For each result in MySQL query, push to array (complicated)

    - by Dylan Taylor
    Okay, here's what I'm trying to do. I am running a MySQL query for the most recent posts. For each of the returned rows, I need to push the ID of the row to an array, then within that ID in the array I need to add more data from the row: a multi-dimensional array. Here's my code thus far:

        $query = "SELECT * FROM posts ORDER BY id DESC LIMIT 10";
        $result = mysql_query($query);
        while ($row = mysql_fetch_array($result)) {
            $id = $row["id"];
            $post_title = $row["title"];
            $post_text = $row["text"];
            $post_tags = $row["tags"];
            $post_category = $row["category"];
            $post_date = $row["date"];
        }

    As you can see, I haven't done anything with arrays yet. Here's the ideal structure I'm looking for, just in case you're confused. The master array, I guess you could call it; we'll just call this array $posts. Within this array, I have one array for each row returned in my MySQL query. Within those arrays there is the $post_title, $post_text, etc. How do I do this? I'm so confused.. an example would be really appreciated. -Dylan
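
    One way to build that structure (a sketch adapted from the code above, not from the original thread): key the master array by the row id and store each row's fields in a nested associative array.

        <?php
        // Sketch: $posts becomes a multi-dimensional array keyed by post id.
        $query  = "SELECT * FROM posts ORDER BY id DESC LIMIT 10";
        $result = mysql_query($query);

        $posts = array();
        while ($row = mysql_fetch_array($result)) {
            $posts[$row["id"]] = array(
                'title'    => $row["title"],
                'text'     => $row["text"],
                'tags'     => $row["tags"],
                'category' => $row["category"],
                'date'     => $row["date"],
            );
        }
        // $posts[123]['title'] now holds the title of post 123, and so on.
        ?>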

    Read the article

  • trying to back up a mysql database using php

    - by user225269
    I got this code from this site: http://www.php-mysql-tutorial.com/wikis/mysql-tutorials/using-php-to-backup-mysql-databases.aspx But I'm just a beginner, so I don't know what config.php and opendb.php are supposed to be. Do I have to create those 2 files in order for this code to work? If yes, then how do I create them? The site doesn't explain how.

        <?php
        include 'config.php';
        include 'opendb.php';

        $tableName = 'mypet';
        $backupFile = 'backup/mypet.sql';
        $query = "SELECT * INTO OUTFILE '$backupFile' FROM $tableName";
        $result = mysql_query($query);

        include 'closedb.php';
        ?>

    Can I just include these lines at the top of the code so that I won't be putting in the include 'opendb.php' anymore?

        $con = mysql_connect("localhost", "root", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("Hospital", $con);
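
    For reference, a sketch of what those two files typically contain in tutorials of this style (this is an assumption; the actual files from that site are not shown here, and the credentials below are placeholders):

        <?php
        // config.php -- connection settings (placeholder values).
        $dbhost = 'localhost';
        $dbuser = 'root';
        $dbpass = '';
        $dbname = 'Hospital';
        ?>

        <?php
        // opendb.php -- opens the connection using the values from config.php.
        $con = mysql_connect($dbhost, $dbuser, $dbpass);
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db($dbname, $con);
        ?>

    So yes, the snippet at the end of the question does the same job as config.php plus opendb.php combined; closedb.php would then just call mysql_close($con).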

    Read the article

  • How to reset the context to the original rectangle after clipping it for drawing?

    - by mystify
    I'm trying to draw a sequence of pattern images (different repeated patterns in one view). So what I did is this, in a loop:

        CGContextRef context = UIGraphicsGetCurrentContext();

        // clip to the drawing rectangle to draw the pattern for this portion of the view
        CGContextClipToRect(context, drawingRect);

        // the first call here works fine... but for the next nothing will be drawn
        CGContextDrawTiledImage(context, CGRectMake(0, 0, 2, 31), [img CGImage]);

    I think that after I've clipped the context to draw the pattern in the specific rectangle, I've cut out a snippet from the big canvas, and the next time my canvas is gone, so I can't cut out another snippet. So I must reset that clipping somehow in order to be able to draw another pattern somewhere else?

    Edit: In the documentation for CGContextClip I found this: "... Therefore, to re-enlarge the paintable area by restoring the clipping path to a prior state, you must save the graphics state before you clip and restore the graphics state after you've completed any clipped drawing. ..." Well then, how do I save the graphics state before clipping, and how do I restore it?

    Read the article

  • Finding the digit root of a number

    - by Jessica M.
    The study question is to find the digit root of an already provided number. The teacher provides us with the number 2638. In order to find the digit root you have to add each digit separately: 2 + 6 + 3 + 8 = 19. Then you take the result 19 and add those two digits together: 1 + 9 = 10. Do the same thing again: 1 + 0 = 1. The digit root is 1. My first step was to use the variable total to add up the digits of the number 2638 to find the total of 19. Then I tried to use the second while loop to separate the two digits by using %. I have to try and solve the problem using basic integer arithmetic (+, -, *, /).

        1. Is it necessary and/or possible to solve the problem using nested while loops?
        2. Is my math correct?
        3. As I wrote it here it does not run in Eclipse. Am I using the while loops correctly?

        import acm.program.*;

        public class Ch4Q7 extends ConsoleProgram {
            public void run() {
                println("This program attempts to find the digit root of your number: ");
                int n = readInt("Please enter your number: ");
                int total = 0;
                int root = total;
                while (n > 0) {
                    total = total + (n % 10);
                    n = (n / 10);
                }
                while (total > 0) {
                    root = total;
                    total = ((total % 10) + total / 10);
                }
                println("your root should be " + root);
            }
        }

    Read the article

  • Sort queryset by a generic foreign key (django)?

    - by thornomad
    I am using Django's comment framework, which utilizes generic foreign keys. Question: how do I sort a given model's queryset by its comment count using the generic foreign key lookup? Reading the Django docs on the subject, it says one needs to calculate them without using the aggregation API: "Django's database aggregation API doesn't work with a GenericRelation. [...] For now, if you need aggregates on generic relations, you'll need to calculate them without using the aggregation API." The only way I can think of, though, would be to iterate through my queryset, generate a list with content_type and object_id's for each item, then run a second queryset on the Comment model filtering by this list of content_type and object_id, sort the objects by the count, then re-create a new queryset in this order by pulling the content_object for each comment. This just seems wrong and I'm not even sure how to pull it off. Ideas? Someone must have done this before. I found this post online but it requires hand-writing SQL -- is that really necessary?

    Read the article

  • Which options do I have for Java process communication?

    - by Dmitriy Matveev
    We have a place in our code of the following form:

        void processParam(Object param) {
            wrapperForComplexNativeObject result = jniCallWhichMayCrash(param);
            processResult(result);
        }

    processParam - a method which is called with many different arguments.
    jniCallWhichMayCrash - a native method which is intended to do some complex processing of its parameter and to create some complex object. It can crash in some cases.
    wrapperForComplexNativeObject - a wrapper type generated by SWIG.
    processResult - a method written in pure Java which processes its parameter by creating several kinds of objects (by "kinds" I don't mean classes; maybe something like hierarchies):

        1 - Some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created from the invocations of the processParam() method with different parameter values. Since it's costly to keep all the duplicates, it's necessary to cache them.
        2 - Some unique objects which reference each other (from the same hierarchy) and some of the objects of the 1st kind.

    After processParam is executed for each of the arguments from some set, the data created in processResult will be processed together. The problem is that the jniCallWhichMayCrash method may crash the entire JVM, and this would be very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore crashes inside the JVM and just skip some chunks of data when such crashes occur. In order to do this we should run the processParam function inside a separate process and pass the result somehow (HOW? HOW?! This is the question) to the main process; in case of any crashes we will only lose some part of the data (that's OK) without losing everything else. So for now the main problem is the implementation of transport between the different processes. Which options do I have? I can think about serialization and transmitting the binary data over streams, but serialization may not be very fast due to the object complexity. Maybe I have some other options for implementing this?

    Read the article

  • escape exactly what in javascript

    - by Emin
    Hi all, being a newbie in JavaScript I came to a situation where I need more information on escaping characters in a string. Basically I know that in order to escape " I need to replace it with \", but what I don't know is which characters I need to escape a particular string for. Is there a list of these "characters to escape", or is it any character that is not a-zA-Z0-9? In my situation, I don't have control over the content that is being displayed on my page. Users enter some text and save it. I then use a webservice to extract the entries from the database, build a JSON array of objects, then iterate over the array when I need to display them. In this case I naturally have no idea what text the user has entered and therefore which characters I need to escape. I also use jQuery for this specific project (just in case it has a function I am not aware of that does what I need). Providing examples would be appreciated, but I also want to learn the theory and logic behind it. Hope someone can be of any help.
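
    If the webservice that builds the JSON happens to be written in PHP (an assumption; the question doesn't say which language it uses), the usual way to sidestep the character-list problem is to let a JSON encoder do the escaping. A sketch:

        <?php
        // Sketch, assuming a PHP backend: json_encode() escapes quotes,
        // backslashes, control characters, etc., so the JavaScript on the
        // client receives string values that are already safe to use.
        $rows = array(
            array('id' => 1, 'text' => "He said \"hi\"\nand left a \\ backslash"),
        );
        header('Content-Type: application/json');
        echo json_encode($rows);
        ?>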

    Read the article

  • supplied argument is not a valid MySQL result

    - by jasmine
    I have written a function:

        function selectWithPaging($where) {
            $pg = (int) (!isset($_GET["pg"]) ? 1 : $_GET["pg"]);
            $pg = ($pg == 0 ? 1 : $pg);
            $perpage = 10; // limit in each page
            $startpoint = ($pg * $perpage) - $perpage;
            $result = mysql_query("SELECT * FROM $where ORDER BY id ASC LIMIT $startpoint,$perpage");
            return $result;
        }

    But when using it in this function:

        function categories() {
            selectWithPaging('category')
            $text .= '<h2 class="mainH">Categories</h2>';
            $text .= '<table><tr class="cn"><td>ID</td><td class="name">Category</td><td>Durum</td>';
            while ($row = mysql_fetch_array($result)) {
                $home = $row['home'];
                $publish = $row['published'];
                $ID = $row['id'];
                $src = '<img src="'.ADMIN_IMG.'homec.png">';
                -------------
            }

    I get this error: "supplied argument is not a valid MySQL result". What is wrong in my first function? Thanks in advance.
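
    For what it's worth, a sketch of the likely cause (an inference from the code shown, not from the original thread): selectWithPaging() does return a result set, but categories() never assigns it, so the $result used by mysql_fetch_array() is undefined there.

        <?php
        // Sketch: capture the return value so mysql_fetch_array() gets a
        // real result resource instead of an undefined variable.
        function categories() {
            $result = selectWithPaging('category'); // assign the returned result

            $text  = '<h2 class="mainH">Categories</h2>';
            $text .= '<table><tr class="cn"><td>ID</td><td class="name">Category</td><td>Durum</td>';

            while ($row = mysql_fetch_array($result)) {
                $home    = $row['home'];
                $publish = $row['published'];
                $ID      = $row['id'];
                $src     = '<img src="' . ADMIN_IMG . 'homec.png">';
                // ... build the rest of the table row here
            }
            return $text;
        }
        ?>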

    Read the article

  • Caching vector addition over changing collections

    - by DRMacIver
    I have the following setup: I have a largish number of uuids (currently about 10k but expected to grow unboundedly - they're user IDs) and a function f : id -> sparse vector with 32-bit integer values (no need to worry about precision). The function is reasonably expensive (not outrageously so, but probably on the order of a few 100ms for a given id). The dimension of the sparse vectors should be assumed to be infinite, as new dimensions can appear over time, but in practice it is unlikely to ever exceed about 20k (and individual results of f are unlikely to have more than a few hundred non-zero values). I want to support the following operations efficiently:

        - add a new ID to the collection
        - invalidate an existing ID
        - retrieve the sum of f(id) in O(changes since last retrieval)

    i.e. I want to cache the sum of the vectors in a way that's reasonable to do incrementally. One option would be to support a remove-ID operation and treat invalidation as a remove followed by an add. The problem with this is that it requires us to keep track of all the old values of f, which is expensive in space. I potentially need to use many instances of this sort of cached structure, so I would like to avoid that. The likely usage pattern is that new IDs are added at a fairly continuous rate and are frequently invalidated at first. Ids which have been invalidated recently are much more likely to be invalidated again than ones which have remained valid for a long time, but in principle an old Id can still be invalidated. Ideally I don't want to do this in memory (or at least I want a way that lets me save the result to disk efficiently), so an idea which lets me piggyback off an existing DB implementation of some sort would be especially appreciated.

    Read the article

  • Client Server communication in Java - which approach to use?

    - by markovuksanovic
    I have a typical client-server communication: the client sends data to the server, the server processes it and returns data to the client. The problem is that the processing can take quite some time - on the order of minutes. There are a few approaches that could be used to solve this:

        1. Establish a connection and keep it alive until the operation is finished and the client receives the response.
        2. Establish a connection, send data, close the connection. Now the processing takes place, and once it is finished the server could establish a connection to the client to send the data.
        3. Establish a connection, send data, close the connection. Processing takes place. The client asks the server every n minutes/seconds whether the operation is finished. If the processing is finished, the client fetches the data.

    I was wondering which approach would be the best way to use. Is there maybe some "de facto" standard for solving this problem? How "expensive" is opening a socket in Java? Solution 1 seems pretty nasty to me, but 2 and 3 could do. The problem with solution 2 is that the server needs to know on which port the client is listening, while solution 3 adds some network overhead.

    Read the article

  • PHP: Extract direct sub directory from path string

    - by Nebs
    I need to extract the name of the direct sub-directory from a full path string. For example, say we have:

        $str = "dir1/dir2/dir3/dir4/filename.ext";
        $dir = "dir1/dir2";

    Then the name of the sub-directory in the $str path relative to $dir would be "dir3". Note that $dir never has '/' at the ends. So the function should be:

        $subdir = getsubdir($str, $dir);
        echo $subdir; // Outputs "dir3"

    If $dir="dir1" then the output would be "dir2". If $dir="dir1/dir2/dir3/dir4" then the output would be "" (empty). If $dir="" then the output would be "dir1". Etc. Currently this is what I have, and it works (as far as I've tested it). I'm just wondering if there's a simpler way, since I find I'm using a lot of string functions. Maybe there's some magic regexp to do this in one line? (I'm not too good with regexp unfortunately.)

        function getsubdir($str, $dir) {
            // Remove the filename
            $str = dirname($str);

            // Remove the $dir
            if (!empty($dir)) {
                $str = str_replace($dir, "", $str);
            }

            // Remove the leading '/' if there is one
            $si = stripos($str, "/");
            if ($si == 0) {
                $str = substr($str, 1);
            }

            // Remove everything after the subdir (if there is anything)
            $lastpart = strchr($str, "/");
            $str = str_replace($lastpart, "", $str);

            return $str;
        }

    As you can see, it's a little hacky in order to handle some odd cases (no '/' in the input, empty input, etc). I hope all that made sense. Any help/suggestions are welcome.
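
    For comparison, a shorter sketch of the same idea (not from the original post): strip the known $dir prefix from the directory part of $str and take the first remaining path segment. It assumes the same inputs as the examples above.

        <?php
        // Sketch: drop the filename, drop the $dir prefix, return the first
        // remaining segment ("" when nothing is left).
        function getsubdir($str, $dir) {
            $path = dirname($str);                    // remove the filename
            if ($dir !== '' && strpos($path, $dir) === 0) {
                $path = substr($path, strlen($dir));  // remove the known prefix
            }
            $parts = explode('/', trim($path, '/'));
            return $parts[0];
        }

        echo getsubdir("dir1/dir2/dir3/dir4/filename.ext", "dir1/dir2"); // dir3
        echo getsubdir("dir1/dir2/dir3/dir4/filename.ext", "dir1");      // dir2
        echo getsubdir("dir1/dir2/dir3/dir4/filename.ext", "");          // dir1
        ?>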

    Read the article
