Search Results

Search found 9244 results on 370 pages for 'thinking sphinx'.


  • Button OnClick event (which is in the code-behind) doesn't get triggered in MVC 2

    - by rksprst
    I had an MVC 1.0 web application that was in VS 2008; I just upgraded the project to VS 2010, which automatically upgraded MVC to 2.0. I have a bunch of view pages that have code-behind files which were added manually. The project worked fine before the upgrade, but now the OnClick events don't get triggered. For example, I have an asp:Button with an OnClick event that points to a method in the code-behind. When you click the button, the OnClick event doesn't get triggered. In fact, when you look at the Page variable, IsPostBack is false. This is really bizarre and I'm wondering if anyone knows what happened and how to fix it. I'm thinking it has something to do with the changes in MVC 2.0, but I'm not sure. Any help is really appreciated; I've been trying to figure this out for a while. (Deleting the code-behinds and moving that logic to the controller is not really an option since there are so many pages, and moving back to VS 2008 is a last resort as I want to make use of some of the VS 2010 features like performance testing.)

    Read the article

  • PHP index page utilizing an idea for a superSwitch...

    - by Matt
    I don't know if this is a great idea or a poor one, but I was thinking I might use one page to display all my pages using includes. Here is what my index.php would look like. In the functions include there is a function called "superSwitch" which determines what requested page will be included. For instance, if I request ?a=a, the value goes to superSwitch('a'); superSwitch takes it, associates it with login.php and responds accordingly. Here is the code for index.php; please let me know if this makes sense and might work, or whether I should just stick to long blocks of code (which is why I am trying this, because I hate long pages full of code). Of course, as you can tell, it is not actually including anything yet; the print_r is for debugging purposes. :) Thanks, Matt

        <?php
        // include functions
        include_once('inc/func.inc.php');

        // set the superget variable
        $superget = @$_GET['a'];

        // check if superget is set or null
        if (!$superget) {
            echo "Nothing Requested :)";
        } else {
            // sanitize the superget request
            $supergetr = supergetSanitize($superget);

            // use the result "good" or "nogood" to determine what happens
            if ($supergetr == "good") {
                // pull the superSwitch value of the request
                $ssresult = superSwitch($superget);
                print_r($ssresult);
            } else {
                // the sanitize check failed, so superSwitch is told to respond with a 404 page
                $superget = "404";
                $ssresult = superSwitch($superget);
                print_r($ssresult);
            }
        }
        ?>
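
    For what it's worth, a superSwitch along the lines described could be a simple whitelist lookup. This is only a sketch of one possible shape for it; the keys and file names here are made up:

        <?php
        // Sketch only: map request keys to include files via a whitelist.
        // The keys and page names are hypothetical examples.
        function superSwitch($key) {
            $pages = array(
                'a'   => 'login.php',
                'b'   => 'profile.php',
                '404' => '404.php',
            );
            if (isset($pages[$key])) {
                return $pages[$key];   // or: include($pages[$key]);
            }
            return $pages['404'];      // anything unknown falls through to the 404 page
        }
        ?>

    A whitelist like this also doubles as part of the sanitize step, since any key not in the map can only ever reach the 404 page.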

    Read the article

  • Checking for empty arrays: count vs empty

    - by Dan McG
    This question on 'How to tell if a PHP array is empty' had me thinking of this question: is there a reason that count should be used instead of empty when determining whether an array is empty or not? My personal thought would be that if the two are equivalent for the case of empty arrays, you should use empty, because it gives a boolean answer to a boolean question. From the question linked above, it seems that count($var) == 0 is the popular method. To me, while technically correct, it makes no sense. E.g. Q: $var, are you empty? A: 7. Hmmm... Is there a reason I should use count == 0 instead, or is it just a matter of personal taste? As pointed out by others in comments on a now-deleted answer, count will have performance impacts for large arrays because it will have to count all elements, whereas empty can stop as soon as it knows the array isn't empty. So, if they give the same results in this case, but count is potentially inefficient, why would we ever use count($var) == 0?
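
    As a quick illustration that the two checks agree on the empty-array case (just a sketch of the behaviour being discussed):

        <?php
        $var = array();

        var_dump(empty($var));        // bool(true)  -- answers the boolean question directly
        var_dump(count($var) == 0);   // bool(true)  -- counts first, then compares

        $var[] = 'item';

        var_dump(empty($var));        // bool(false)
        var_dump(count($var) == 0);   // bool(false)
        ?>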

    Read the article

  • How to work with threading and a ConcurrentQueue<T>

    - by dboarman
    I am trying to figure out what the best way of working with a queue will be. I have a process that returns a DataTable. Each DataTable, in turn, is merged with the previous DataTable. There is one problem: too many records to hold until the final BulkCopy (OutOfMemory). So, I have determined that I should process each incoming DataTable immediately. I'm thinking about ConcurrentQueue<T>, but I don't see how the WriteQueuedData() method would know to dequeue a table and write it to the database. For instance:

        public class TableTransporter
        {
            private ConcurrentQueue<DataTable> tableQueue = new ConcurrentQueue<DataTable>();

            public TableTransporter()
            {
                tableQueue.OnItemQueued += new EventHandler(WriteQueuedData);   // no events available
            }

            public void ExtractData()
            {
                DataTable table;
                // perform data extraction
                tableQueue.Enqueue(table);
            }

            private void WriteQueuedData(object sender, EventArgs e)
            {
                BulkCopy(e.Table);
            }
        }

    My first question is: aside from the fact that I don't actually have any events to subscribe to, if I call ExtractData() asynchronously, will this be all that I need? Second, is there something I'm missing about the way ConcurrentQueue<T> functions, such that it needs some form of trigger to work asynchronously with the queued objects?
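
    For reference, ConcurrentQueue<T> itself exposes no events; a common pattern is to wrap it in a BlockingCollection<T> (which uses a ConcurrentQueue<T> underneath by default) and let one dedicated consumer drain it. A rough sketch, assuming the BulkCopy(DataTable) helper from the original code:

        using System.Collections.Concurrent;
        using System.Data;
        using System.Threading.Tasks;

        public class TableTransporter
        {
            // BlockingCollection wraps a ConcurrentQueue<T> by default.
            private readonly BlockingCollection<DataTable> tableQueue =
                new BlockingCollection<DataTable>();

            public TableTransporter()
            {
                // One long-running consumer drains the queue and writes each table.
                Task.Factory.StartNew(WriteQueuedData, TaskCreationOptions.LongRunning);
            }

            public void Enqueue(DataTable table)
            {
                tableQueue.Add(table);
            }

            public void CompleteAdding()
            {
                tableQueue.CompleteAdding();   // lets the consumer loop finish
            }

            private void WriteQueuedData()
            {
                // Blocks while waiting for items; exits once CompleteAdding() is called.
                foreach (DataTable table in tableQueue.GetConsumingEnumerable())
                {
                    BulkCopy(table);   // assumed helper from the original code
                }
            }

            private void BulkCopy(DataTable table)
            {
                // placeholder for the existing SqlBulkCopy logic
            }
        }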

    Read the article

  • Should you declare methods using overloads or optional parameters in C# 4.0?

    - by Greg Beech
    I was watching Anders' talk about C# 4.0 and the sneak preview of C# 5.0, and it got me thinking: when optional parameters are available in C#, what is going to be the recommended way to declare methods that do not need all parameters specified? For example, something like the FileStream class has about fifteen different constructors which can be divided into logical 'families', e.g. the ones below from a string, the ones from an IntPtr and the ones from a SafeFileHandle.

        FileStream(string, FileMode);
        FileStream(string, FileMode, FileAccess);
        FileStream(string, FileMode, FileAccess, FileShare);
        FileStream(string, FileMode, FileAccess, FileShare, int);
        FileStream(string, FileMode, FileAccess, FileShare, int, bool);

    It seems to me that this type of pattern could be simplified by having three constructors instead, and using optional parameters for the ones that can be defaulted, which would make the different families of constructors more distinct [note: I know this change will not be made in the BCL; I'm talking hypothetically for this type of situation]. What do you think? From C# 4.0, will it make more sense to make closely related groups of constructors and methods a single method with optional parameters, or is there a good reason to stick with the traditional many-overload mechanism?
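
    As a rough illustration of the collapsed form the question has in mind (a hypothetical wrapper, not the real BCL signature; the default values are invented for the sketch):

        using System.IO;

        public static class FileStreams
        {
            // One method with optional parameters standing in for the whole string-based
            // family of overloads; callers supply only what they need.
            public static FileStream Open(string path,
                                          FileMode mode,
                                          FileAccess access = FileAccess.ReadWrite,
                                          FileShare share = FileShare.Read,
                                          int bufferSize = 4096,
                                          bool useAsync = false)
            {
                return new FileStream(path, mode, access, share, bufferSize, useAsync);
            }
        }

        // Usage: named arguments let callers skip the defaulted middle parameters.
        // var stream = FileStreams.Open("data.bin", FileMode.Open, useAsync: true);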

    Read the article

  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer that primarily writes in PHP, [X]HTML, CSS, JavaScript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and use Vista Ultimate 64 as my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music. I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I had XP Professional 32 on it for the last two years, and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow. I usually sketch a lot, for explaining things, developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet. My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above? If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming? I could manage with my 15.4" screen, but am spoiled by the two 19" displays on my home desktop and my job's workstation.

    Read the article

  • Asterisk: Dropping calls with an "ast_yyerror"

    - by Nick
    I'm having an issue where Asterisk will play our greeting to the caller and then drop the call instead of making our phones ring. The bit of information I could find said this is caused by an error in evaluating a dialplan expression. I'm thinking it's this line:

        exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1)

    But I'm not sure what's wrong with it. I see the following error on the console:

        [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input:=TRUE^

    Surrounding console output:

        -- Executing [START@AGInbound:1] Answer("IAX2/AtlantaTeliax-10086", "") in new stack
        -- Executing [START@AGInbound:2] BackGround("IAX2/AtlantaTeliax-10086", 0000_AG_THANK_YOU_FOR_CALLING_AG") in new stack
        -- Playing '0000_AG_THANK_YOU_FOR_CALLING_AG.slin' (language 'en')
        [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input: =TRUE ^
        [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:463 ast_yyerror: If you have questions, please refer to doc/tex/channelvariables.tex in the asterisk source.
        -- Executing [START@AGInbound:3] GotoIf("IAX2/AtlantaTeliax-10086", "?CLOSED,1") in new stack
        -- Executing [START@AGInbound:4] GotoIfTime("IAX2/AtlantaTeliax-10086", "9:30-17:0|mon-fri|*|*?OPEN,1") in new stack
        -- Executing [START@AGInbound:5] GotoIfTime("IAX2/AtlantaTeliax-10086", "10:0-18:30|sat|*|*?OPEN,1") in new stack
        -- Executing [START@AGInbound:6] GotoIfTime("IAX2/AtlantaTeliax-10086", "12:0-17:0|sun|*|*?OPEN,1") in new stack

    Relevant lines from the dialplan:

        exten = START,1,Answer()
        exten = START,n,Background(0000_AG_THANK_YOU_FOR_CALLING_AG)
        ; See if we're open
        ; Force Closed if no one's going to be answering
        exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1)
        exten = START,n,GotoIfTime(${AG_WEEKDAY_OPEN_HOUR}:${AG_WEEKDAY_OPEN_MIN}-${AG$
        exten = START,n,GotoIfTime(${AG_SATURDAY_OPEN_HOUR}:${AG_SATURDAY_OPEN_MIN}-${$
        exten = START,n,GotoIfTime(${AG_SUNDAY_OPEN_HOUR}:${AG_SUNDAY_OPEN_MIN}-${AG_S$
        ; ...and we're not. But maybe the time of day has been overridden?
        exten = START,n,GotoIf($[${OVERRIDE_TIME_OF_DAY}=TRUE]?OPEN,1)
        ; No override... We're definatly closed.
        exten = START,n,Goto(CLOSED,1)

    Read the article

  • How can I provide users with the functionality of the DbUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a Java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for the test setup, execution and clean-up stages, the user is provided with text boxes where they are able to enter SQL code which is executed by the isql command. I would like to extend the application to use DbUnit's DatabaseOperation methods to provide more ways to set up the state of the database than just SQL statements. The main reason for using DbUnit rather than just SQL statements is to be able to create and store xml and xls DataSets on a server where they can be associated with their test cases and used for data setup. My question is: how can I provide users with the functionality of the DbUnit DatabaseOperation methods from a web interface? I have considered:
    1. Creating a simple programming language and a parser to read some simple syntax involving the DbUnit method names, which would accept as a parameter the file location of an xml or xls DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and provide each file with an identifier that could be passed as a parameter to the methods in this simple programming language.
    2. Creating an XML DTD which provides the user with the ability to specify operations and parameters. If I went with this approach, how can I execute the methods and their parameters that I parse from the XML document?
    3. Creating a table in the database which stores the method and an FK relation to a catalogued DataSet file; however, I don't think this would be a good solution due to the fact that data entry would be tedious.
    Thanks for your help.

    Read the article

  • Putting a Select Statement in a Hibernate Transaction

    - by Mark Estrada
    Hi all, I have been reading around the net for a while regarding Hibernate, but I can't seem to understand one concept regarding transactions. On some sites that I have visited, select statements are wrapped in a transaction, like this:

        public List<Book> readAll() {
            Session session = HibernateUtil.getSessionFactory().getCurrentSession();
            session.beginTransaction();
            List<Book> booksList = session.createQuery("from Book").list();
            session.getTransaction().commit();
            return booksList;
        }

    while other sites do not advocate the use of a transaction for select statements:

        public List<Book> readAll() {
            Session session = HibernateUtil.getSessionFactory().getCurrentSession();
            List<Book> booksList = session.createQuery("from Book").list();
            return booksList;
        }

    I am wondering which one I should follow. Any thoughts, please? Are transactions needed for select statements or not? Thanks.

    Read the article

  • Generating dynamic CSS using PHP and JavaScript

    - by Onkar Deshpande
    I want to generate a tooltip based on a dynamically changing background image in CSS. This is my my_css.php file:

        <?php
        header('content-type: text/css');
        $i = $_GET['index'];
        if ($i == 0)
            $bg_image_path = "../bg_red.jpg";
        elseif ($i == 1)
            $bg_image_path = "../bg_yellow.jpg";
        elseif ($i == 2)
            $bg_image_path = "../bg_green.jpg";
        elseif ($i == 3)
            $bg_image_path = "../bg_blue.jpg";
        ?>
        .tooltip {
            white-space: nowrap;
            color: green;
            font-weight: bold;
            border: 1px solid black;
            font-size: 14px;
            background-color: white;
            margin: 0;
            padding: 7px 4px;
            border: 1px solid black;
            background-image: url(<?php echo $bg_image_path; ?>);
            background-repeat: repeat-x;
            font-family: Helvetica, Arial, Sans-Serif;
            font-family: Times New Roman, Georgia, Serif;
            filter: alpha(opacity=85);
            opacity: 0.85;
            zoom: 1;
        }

    In order to use this CSS I added

        <link rel="stylesheet" href="css/my_css.php" type="text/css" media="screen" />

    in the <head> tag of my HTML page. I am thinking of passing different values of 'index' so that it generates the background image dynamically. Can anyone tell me how I should pass such values from JavaScript? I am creating the tooltip using

        var tooltip = document.createElement("div");
        document.getElementById("map").appendChild(tooltip);
        tooltip.style.visibility = "hidden";

    and I think that before calling createElement, I should set the background image.
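
    One possible way to pass the index from JavaScript (a sketch only; the index value shown is arbitrary) is to build the <link> element in script so its href carries the query string to my_css.php:

        // Sketch: append a stylesheet whose href passes the desired index to the PHP file.
        function loadTooltipCss(index) {
            var link = document.createElement("link");
            link.rel = "stylesheet";
            link.type = "text/css";
            link.media = "screen";
            link.href = "css/my_css.php?index=" + index;   // e.g. 2 selects the green background
            document.getElementsByTagName("head")[0].appendChild(link);
        }

        // Call this before creating the tooltip element.
        loadTooltipCss(2);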

    Read the article

  • Ranking/weighting search results

    - by biso
    I am trying to build an application that has a smart, adaptive search engine (let's say for cars). If I search for 4x4, then the DB will return all the 4x4 cars I have (100 cars), but as time goes by and I start checking out cars, liking them, commenting on them, etc., the order of the search results should be different. That means one month later, when searching for 4x4, I should get the same result set ordered differently as per my previous interaction with the site. If I was mainly liking and commenting on German cars, BMW should be at the top and the Land Cruiser should be further down. This ranking should be based on attributes that I captured during user interaction (e.g. car origin, user age, user location, car type [4x4, coupe, hatchback], price range). So for each car result I get, I will be weighting it based on how well it performs on the five attributes above. I intend to use the DB just as a repository and do the ranking and the thinking on the server. My question is: what kind of algorithm should I be using to weight/rank my search results? Thanks.
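
    A common starting point for this kind of adaptive ordering is a plain linear weighted score per car, with the weights adjusted from the captured interactions. A rough C# sketch; the attribute names and weight values are invented for illustration:

        using System.Collections.Generic;
        using System.Linq;

        public class CarScorer
        {
            // Hypothetical weights learned from likes/comments; higher means the user cares more.
            private readonly Dictionary<string, double> weights = new Dictionary<string, double>
            {
                { "origin",     0.40 },
                { "carType",    0.30 },
                { "priceRange", 0.15 },
                { "location",   0.10 },
                { "ageFit",     0.05 },
            };

            // features: per-car scores in [0, 1] for each attribute, computed elsewhere.
            public double Score(IDictionary<string, double> features)
            {
                return weights.Sum(w => w.Value * (features.ContainsKey(w.Key) ? features[w.Key] : 0.0));
            }
        }

        // The 100 matching 4x4s would then be ordered by descending Score(...) before display.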

    Read the article

  • Helping Rails Newbies identify version-specific information on web pages

    - by corprew
    I am trying to help some people who are getting started programming on Rails identify which version the advice found on web pages corresponds to, and am seeking advice and/or guides on how to do it so they don't have to rely on me and/or waste time trying outdated advice. Narrative: I am helping some people get up to speed on Rails development, and their stock response to running into problems is searching Google for advice. They're using 2.3.5 and thinking of moving to 3. The problem they're running into is that there's a lot of advice out there specific to older Rails versions (2.2, for example, being popular) that isn't identified. I can usually figure out when the pages are old pretty easily, but they can't (yet). It seems like random web page authors don't identify which version they're using when they're using the current version, and not all pages are dated. This seems to be a general problem that will get worse -- current unadorned advice is usually 2.3.5 and older unadorned advice is 2.2.x at this point, but people are moving / will be moving to version 3 over the next while, and newbies will be stuck looking at a bunch of deprecated/incompatible 2.3.x advice without realizing which version it is. Any advice / pointers / telltales?

    Read the article

  • Allow paste in worksheet without overwriting locked cells

    - by jjeaton
    I have a protected worksheet that users would like to copy and paste into. I have no control over the workbook they are copying from. The protected worksheet has some rows that are available for data entry, and other rows that are locked and greyed out to the user. The users would like to be able to paste over the top of the entire worksheet from another, arbitrary workbook and have all the cells available for data entry filled in, while the locked cells are undisturbed. In the current state, the user gets an error when they try to paste, because the paste cannot overwrite the locked cells. Example:

    Worksheet 1 (source):

        Act1  100  100  100
        Act2  100  100  100
        Act3  100  100  100

    Worksheet 2 (target; the second row is locked):

        Act1  300  300  300
        Act2  200  200  200
        Act3  100  100  100

    After copying/pasting, Worksheet 2 should look like this:

        Act1  100  100  100
        Act2  200  200  200
        Act3  100  100  100

    The values from Worksheet 1 are populated and the locked rows are undisturbed. I've been thinking along the lines of having a hook where, on paste, the locked cells are unlocked so that the paste can happen, and are then reverted to their original values and relocked. Is there some way I can loop through the cells in the clipboard and only paste cells where the target isn't locked? It is preferable not to create a separate button for paste, so there is less impact on the users, but if that's the only way, I'm not opposed to it. Currently, I plan on grouping the locked rows together so that the data entry cells are contiguous, but then the accounts will be out of order, which is not preferred.

    Read the article

  • TFS: Choose which Team Project to add a solution to

    - by Patricker
    I have a solution which I developed in VS2008 and which I am trying to add to source control (TFS 2010, though the issue happened in TFS 2008 as well). I have several TFS workspaces on my computer and I have access to several Team Projects. When I right-click the solution in my Solution Explorer and choose the "Add Solution to Source Control" option, I am never given the option of choosing which workspace or which Team Project to add the existing solution to. VS2008 then proceeds to add it to the same Team Project every time. I have tried selecting an alternate workspace/Team Project in every window where I can see an option for it, but it always adds it back to the same one. I even tried changing the name of my new workspace so that alphabetically it was the first, thinking that it might somehow be related to that... no luck. I then tried going to the Change Source Control window, where you can add/remove bindings on a solution/project, but that window also defaults to the same Team Project that adding the solution directly does... Any help with this would be greatly appreciated; maybe I'm just missing something?

    Read the article

  • How often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me and not everyone in this forum belongs to the other forums too. I was looking at a large, decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests. That's about 1% of the total tests that have been written to date. MORE INFORMATION: the ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest, even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus tests get run, not just a few. QUESTION: given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)

    Read the article

  • HTML5 Database Transactions

    - by jiewmeng
    I am wondering about the example in the W3C Offline Web Apps document. The example:

        function renderNotes() {
            db.transaction(function(tx) {
                tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []);
                tx.executeSql('SELECT * FROM Notes', [], function(tx, rs) {
                    for (var i = 0; i < rs.rows.length; i++) {
                        renderNote(rs.rows[i]);
                    }
                });
            });
        }

    has the CREATE TABLE before the 'main' executeSql(). Would it be better if I did something like this?

        $(function() {
            // create the table first
            db.transaction(function(tx) {
                tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []);
            });

            // when I want to select/modify data, I just do the actual action
            db.transaction(function(tx) {
                tx.executeSql('SELECT * FROM Notes', [], function(tx, rs) { ... });
            });

            db.transaction(function(tx) {
                tx.executeSql('INSERT ...', [], function(tx, rs) { ... });
            });
        })

    I was thinking I don't need to keep repeating the CREATE TABLE IF NOT EXISTS, right?

    Read the article

  • Understanding Symbols In Ruby

    - by Kezzer
    Despite reading this article, I'm still confused as to the representation of the data in memory when it comes to using symbols. If two symbols with the same name, contained in different objects, exist in the same memory location, then how is it that they contain different values? I'd have expected the same memory location to contain the same value. As a quote from the link: "Unlike strings, symbols of the same name are initialized and exist in memory only once during a session of ruby." I just don't understand how it manages to differentiate the values contained in the same memory location. EDIT: So let's consider the example:

        patient1 = { :ruby => "red" }
        patient2 = { :ruby => "programming" }

        patient1.each_key {|key| puts key.object_id.to_s}
        3918094
        patient2.each_key {|key| puts key.object_id.to_s}
        3918094

    patient1 and patient2 are both hashes; that's fine. :ruby, however, is a symbol. If we were to output the following:

        patient1.each_key {|key| puts key.to_s}

    then what will be output? "red", or "programming"? FURTHER EDIT: I'm still really quite confused. I'm thinking a symbol is a pointer to a value. Let's forget hashes for a second. The questions I have are: can you assign a value to a symbol? Is a symbol just a pointer to a variable with a value in it? If symbols are global, does that mean a symbol always points to one thing?
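
    For what it's worth, a quick way to see the distinction is that the symbol is only the key (an immutable name); the values live in each hash, not in the symbol. A small sketch of the example above:

        patient1 = { :ruby => "red" }
        patient2 = { :ruby => "programming" }

        # The key prints as its own name, never as a stored value.
        patient1.each_key { |key| puts key.to_s }   # => ruby

        # The values are looked up in each hash separately.
        puts patient1[:ruby]   # => red
        puts patient2[:ruby]   # => programming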

    Read the article

  • Loading a native library for an external package in Eclipse not working. Is it a bug?

    - by TacB0sS
    I was about to report a bug to Eclipse, but I was thinking I would give this a chance here first: if I add an external package, the application cannot find the referenced native library, except in the case specified below. If my workspace consists of a single project, and I import an external package 'EX_package.jar' from a folder outside of the project folder, I can assign a folder to the native library location via: mouse over the package - right click - Properties - Native Library - enter your folder. This does not work: at runtime the application does not load the library, and System.mapLibraryName(Path) also does not work. Furthermore, if I create a User Library, add the package to it and define a folder for the native library, it still does not work. If it works for you, then I have a major problem, since it does not work on my computer. I tested this in every combination I could think of, including adding the path to the Windows PATH variable, and in so many other ways I can't even begin to remember; nothing worked. I played with this for hours and had a colleague try to assist me, but we both came up empty. Furthermore, if I have a main project that depends on a few other projects in my workspace, and they all need to use the same 'EX_package.jar', I MUST supply a HARD COPY INTO EACH OF THEM; it will ONLY (I can't stress the ONLY-ness enough, I got freaked out by this) work if I have a hard copy of the package in ALL of the project folders that the main project has a dependency on, and ONLY if I configure the native path in each of them!! This also didn't do the trick. Please tell me there is a solution to this; it drives me nuts... Thanks, Adam Zehavi.

    Read the article

  • When does an ARM7 processor increment its PC register?

    - by Summer_More_More_Tea
    Hi everyone: I've been thinking about this question for a while: when does an ARM7 processor (with its 3-stage pipeline) increment its PC register? I originally thought that after an instruction has been executed, the processor first checks whether there was any exception in the last execution, then increments the PC by 2 or 4 depending on the current state. If an exception occurs, the ARM7 will change its running mode, store the PC in the LR of the current mode and begin to process the current exception without modifying the PC register. But this makes no sense when analyzing the return instructions. I cannot work out why the PC is assigned LR when returning from an undefined-instruction exception but LR-4 when returning from a prefetch-abort exception; don't both of these exceptions happen at the decoding stage? What's more, according to my textbook, the PC will always be assigned LR-4 when returning from a prefetch-abort exception, no matter what state the processor was in (ARM or Thumb) before the exception occurred. However, I think the PC should be assigned LR-2 if the original state was Thumb, since a Thumb instruction is 2 bytes long instead of the 4 bytes an ARM instruction occupies, and we just want to roll back one instruction in the current state. Are there any flaws in my reasoning, or is something wrong with the textbook? This turned into a long question. I really hope someone can help me get the right answer. Thanks in advance.

    Read the article

  • What kind of storage with two-way replication for a multi-site C# application?

    - by twk
    Hi, I have a web-based system written using ASP.NET and backed by MSSQL. A synchronized replica of this system is to be run at mobile locations and must be available regardless of the state of the connection to the main system (interruptions a few hours long happen). For now I am using a copy of the main web application and a copy of the MSSQL server with merge replication to the main system. This works unreliably, and setting up the replication is a pain. The amount of data the system contains is not huge, so I can migrate to a different storage type. For the new version of this system I would like to implement a new replication scheme. I am considering migrating to db4o for storage, with its replication support. I am also thinking about other possible solutions like CouchDB, which has native replication support. I would like to stay with C#. Could you recommend a way to go for such a distributed environment? PS. Master-slave replication is not an option: any side must be allowed to add/update data.

    Read the article

  • REST and client rights integration, and Backbone.js

    - by Francois
    I have started to become more and more interested in the REST architectural style and client-side development, and I was thinking of using Backbone.js on the client and a REST API (using ASP.NET Web API) for a little meeting management application. One of my requirements is that users with admin rights can edit meetings and other users can only see them. I was then wondering how to integrate the current user's rights into the response for a given resource. My problem goes beyond knowing whether a user is authenticated or not; I want to know if I need to render the little 'edit' button next to the meeting (let's say I'm listing the current meetings in a grid) or not. Let's say I'm GETting /api/meetings and this is returning a list of meetings with their respective individual URIs. How can I add whether the user is able to edit each resource or not? This is an interesting passage from one of Roy's blog posts: "A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations." It states that all transitions must be driven by the choices that are present in the representation. Does that mean that I can add an 'editURI' and a 'deleteURI' to each of the meetings I'm returning? If this information is there, I can render the 'edit' button, and if it's not there, I just don't? What are the best practices for integrating the user's rights into the entity's representation? Or is this a super bad idea, and another round trip is needed to fetch that information?
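
    As a sketch of that hypermedia-style option (the field names here are invented; this is not an ASP.NET Web API or Backbone convention), the server could include in each meeting representation only the links the current user is permitted to follow:

        {
            "id": 42,
            "title": "Sprint planning",
            "start": "2010-06-01T09:00:00Z",
            "links": [
                { "rel": "self",   "href": "/api/meetings/42" },
                { "rel": "edit",   "href": "/api/meetings/42" },
                { "rel": "delete", "href": "/api/meetings/42" }
            ]
        }

    A non-admin user would receive the same meeting with only the 'self' link, and the client renders the edit button only when an 'edit' link is present, so no extra round trip is needed.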

    Read the article

  • Does Android AsyncTaskQueue or similar exist?

    - by Ben L.
    I read somewhere (and have observed) that starting threads is slow. I always assumed that AsyncTask created and reused a single thread because it required being started on the UI thread. The following (anonymized) code is called from a ListAdapter's getView method to load images asynchronously. It works well until the user moves the list quickly, and then it becomes "janky".

        final File imageFile = new File(getCacheDir().getPath() + "/img/" + p.image);
        image.setVisibility(View.GONE);
        view.findViewById(R.id.imageLoading).setVisibility(View.VISIBLE);

        (new AsyncTask<Void, Void, Bitmap>() {
            @Override
            protected Bitmap doInBackground(Void... params) {
                try {
                    Bitmap image;
                    if (!imageFile.exists() || imageFile.length() == 0) {
                        image = BitmapFactory.decodeStream(new URL(
                                "http://example.com/images/" + p.image).openStream());
                        image.compress(Bitmap.CompressFormat.JPEG, 85,
                                new FileOutputStream(imageFile));
                        image.recycle();
                    }
                    image = BitmapFactory.decodeFile(imageFile.getPath(), bitmapOptions);
                    return image;
                } catch (MalformedURLException ex) {
                    ex.printStackTrace();
                    return null;
                } catch (IOException ex) {
                    ex.printStackTrace();
                    return null;
                }
            }

            @Override
            protected void onPostExecute(Bitmap image) {
                if (view.getTag() != p) // The view was recycled.
                    return;
                view.findViewById(R.id.imageLoading).setVisibility(View.GONE);
                view.findViewById(R.id.image).setVisibility(View.VISIBLE);
                ((ImageView) view.findViewById(R.id.image)).setImageBitmap(image);
            }
        }).execute();

    I'm thinking that a queue-based approach would work better, but I'm wondering if one exists or if I should attempt to create my own implementation.
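
    One way to get queue-like behaviour without AsyncTask (just a sketch, assuming a loadBitmap(File) helper that does the decode/download work shown above) is a single-threaded executor that processes requests one at a time and posts results back to the UI thread:

        import java.io.File;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        import android.graphics.Bitmap;
        import android.os.Handler;
        import android.os.Looper;
        import android.widget.ImageView;

        public class ImageLoader {
            // One background thread; submitted tasks queue up behind each other.
            private final ExecutorService executor = Executors.newSingleThreadExecutor();
            private final Handler uiHandler = new Handler(Looper.getMainLooper());

            public void load(final File imageFile, final ImageView target) {
                executor.submit(new Runnable() {
                    public void run() {
                        final Bitmap bitmap = loadBitmap(imageFile);   // assumed helper
                        uiHandler.post(new Runnable() {
                            public void run() {
                                target.setImageBitmap(bitmap);
                            }
                        });
                    }
                });
            }

            private Bitmap loadBitmap(File file) {
                // placeholder: decode from cache or download, as in the original doInBackground()
                return null;
            }
        }

    A recycled-view check like the view.getTag() test in the original code would still be needed before setting the bitmap.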

    Read the article

  • How to make active services highly available?

    - by Jader Dias
    I know that with Network Load Balancing and Failover Clustering we can make passive services highly available. But what about active apps? Example: one of my apps retrieves some content from an external resource at a fixed interval. I have imagined the following scenarios:
    1. Run it on a single machine. Problem: if this instance falls over, the content won't be retrieved.
    2. Run it on each machine of the cluster. Problem: the content will be retrieved multiple times.
    3. Have it on each machine of the cluster, but run it on only one of them. Each instance will have to check some sort of common resource to decide whether it is its turn to do the task or not.
    When I was thinking about solution #3, I wondered what the common resource should be. I have thought of creating a table in the database which we could use to acquire a global lock. Is this the best solution? How do people usually do this? By the way, it's a C# .NET WCF app running on Windows Server 2008.
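
    If the database does end up being the common resource, one possible refinement (a sketch only, with placeholder names) is to use SQL Server's sp_getapplock instead of a hand-rolled lock table; whichever node acquires the application lock runs the task for that interval:

        using System.Data;
        using System.Data.SqlClient;

        public static class LeaderLock
        {
            // Returns true if this node acquired the cluster-wide lock and should run the task.
            public static bool TryAcquire(SqlConnection openConnection, string resourceName)
            {
                using (SqlCommand cmd = new SqlCommand("sp_getapplock", openConnection))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@Resource", resourceName);
                    cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
                    cmd.Parameters.AddWithValue("@LockOwner", "Session");   // released when the session ends
                    cmd.Parameters.AddWithValue("@LockTimeout", 0);         // don't wait; losing means another node runs it

                    SqlParameter result = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
                    result.Direction = ParameterDirection.ReturnValue;

                    cmd.ExecuteNonQuery();
                    return (int)result.Value >= 0;   // 0 or 1 = granted, negative = not granted
                }
            }
        }

    The explicit counterpart for releasing early is sp_releaseapplock; otherwise the lock goes away with the session.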

    Read the article

  • Draw a position from a 2D array at the respective canvas location

    - by Anon
    Background: I have two 2D arrays. Each index within each 2D array represents a tile which is drawn on a square canvas suitable for 8 x 8 tiles. The first 2D array represents the ground tiles and is looped over and drawn on the canvas using the following code:

        // Draw the map from the land 2d array
        map = new Canvas(mainFrame, 20, 260, 281, 281);
        for (int i = 0; i < world.length; i++) {
            for (int j = 0; j < world[i].length; j++) {
                for (int x = 0; x < 280; x = x + 35) {
                    for (int y = 0; y < 280; y = y + 35) {
                        Point p = new Point(x, y);
                        map.add(new RectangleObject(p, 35, 35, Colour.green));
                    }
                }
            }
        }

    This creates a grid of green tiles 8 x 8 across, as intended. The second 2D array represents the position on the ground. Every one of its indexes is null apart from one, which holds a Person object. Problem: I am unsure how I can draw the position on the grid. I was thinking of a similar loop, so it draws another set of 64 tiles over the previous 2D array, only this time they are all transparent except the one tile which isn't null; in other words, the tile where the Person is located. I wanted to search within the loop using a comparative if statement along the lines of

        if (!(world[] == null)) {
            map.add(new RectangleObject(p, 35, 35, Colour.red));
        }

    However my knowledge is limited and I am confused about how to implement it.
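
    One possible second pass (a sketch reusing the Point/RectangleObject/Colour helpers from the code above, and assuming the occupancy array is called world2 with the same 8 x 8 shape) draws a tile only where the array holds a Person, leaving the rest of the ground untouched:

        // Overlay pass: draw a red tile only where the occupancy array is non-null.
        for (int i = 0; i < world2.length; i++) {
            for (int j = 0; j < world2[i].length; j++) {
                if (world2[i][j] != null) {
                    // Column j maps to the x position, row i to the y position, 35 pixels per tile.
                    Point p = new Point(j * 35, i * 35);
                    map.add(new RectangleObject(p, 35, 35, Colour.red));
                }
            }
        }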

    Read the article

  • Long-running operations (threads) in a web (ASP.NET) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functionality I will have to support some long-running operations, for example:

    Initiated by the user: a user can upload an (XML) file to the server. On the server I need to extract the file, do some manipulation (insert into the DB), etc. This can take from one minute to ten minutes (or even more, depending on file size). Of course I don't want to block the request while the import is running; I want to redirect the user to a progress page where they will have a chance to watch the status and errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, and it would be nice to run the imports in parallel. At first I was thinking of creating a new thread in IIS (in the controller action) and running the import on that thread, but I am not sure whether this is a good idea (creating worker threads on a web server). Should I use Windows services or some other approach?

    Initiated by the system:
    - I will have to periodically update the Lucene index with new data.
    - I will have to send mass emails (in the future).
    Should I implement these as jobs in the site and run them via Quartz.NET, or should I also create a Windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!
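
    For the user-initiated import, a minimal in-process sketch (all names here are invented) is to hand the work to the thread pool and track progress in a static dictionary that the progress page polls. Note that anything in-process dies when IIS recycles the worker process, which is the usual argument for moving such work to a Windows service or a Quartz.NET job:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Sketch only: an in-process job tracker that a progress action could poll.
        public static class ImportJobs
        {
            private static readonly ConcurrentDictionary<Guid, int> Progress =
                new ConcurrentDictionary<Guid, int>();   // job id -> percent complete

            public static Guid Start(string uploadedFilePath)
            {
                Guid jobId = Guid.NewGuid();
                Progress[jobId] = 0;

                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // RunImport is an assumed helper that parses the file, inserts rows,
                    // and reports progress as it goes.
                    RunImport(uploadedFilePath, percent => Progress[jobId] = percent);
                });

                return jobId;   // the controller would redirect to a progress page keyed by this id
            }

            public static int GetProgress(Guid jobId)
            {
                int percent;
                return Progress.TryGetValue(jobId, out percent) ? percent : -1;
            }

            private static void RunImport(string path, Action<int> report)
            {
                // placeholder for the real extraction / DB insert work
                report(100);
            }
        }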

    Read the article
