Search Results

Search found 23613 results on 945 pages for 'query parameters'.

Page 867/945 | < Previous Page | 863 864 865 866 867 868 869 870 871 872 873 874  | Next Page >

  • How can I make keyword order more relevant in my search?

    - by Atomiton
    In my database, I have a keywords field that stores a comma-delimited list of keywords. For example, a Shrek doll might have the following keywords: ogre, green, plush, hero, boys' toys. A "Beanie Baby" doll (that happens to be an ogre) might have: beanie baby, kids toys, beanbag toys, soft, infant, ogre. (That's a completely contrived example.) What I'd like to do is, if the consumer searches for "ogre", have the "Shrek" doll come up higher in the search results. My content administrator feels that if the keyword is earlier in the list, it should get a higher ranking. (This makes sense to me, and it makes it easy for me to let them control the search result relevance.) Here's a simplified query:

        SELECT p.ProductID AS ContentID
             , p.ProductName AS Title
             , p.ProductCode AS Subtitle
             , 100 AS Rank
             , p.ProductKeywords AS Keywords
        FROM Products AS p
        WHERE FREETEXT( p.ProductKeywords, @SearchPredicate )

    I'm thinking of replacing the Rank expression with something along the lines of:

        , 200 - INDEXOF(@SearchTerm) AS Rank

    This "should" rank the keyword results by their relevance. I know INDEXOF isn't a SQL command... but it's something LIKE that I would like to accomplish. Am I approaching this the right way? Is it possible to do something like this? Does this make sense?
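
    A possible direction, not from the original question: T-SQL's CHARINDEX(substring, string) returns the 1-based position of a substring (0 when absent), so an expression along the lines of 200 - CHARINDEX(@SearchTerm, p.ProductKeywords) comes close to the idea above. The sketch below only illustrates the scoring logic in Python; the keyword strings are the question's own examples.

        def keyword_rank(keywords, term, base=200):
            """Score a comma-delimited keyword list: earlier keyword => higher rank."""
            tokens = [k.strip().lower() for k in keywords.split(",")]
            if term.lower() not in tokens:
                return 0                                   # term absent: lowest possible rank
            return base - tokens.index(term.lower())       # earlier position => bigger score

        # The Shrek doll outranks the Beanie Baby for "ogre":
        print(keyword_rank("ogre, green, plush, hero, boys' toys", "ogre"))                       # 200
        print(keyword_rank("beanie baby, kids toys, beanbag toys, soft, infant, ogre", "ogre"))   # 195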

    Read the article

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I had access somehow to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason). If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly will this be? Each binary array will be on the order of 1-2MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these out (in time) and they will usually be happening close to one another on the 5 minute intervals. Does anyone have a good feel for how disastrous this will be? Seems like it would be problematic. An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at PyTables.) I'd prefer to avoid this though if possible. Is there any other alternative that I am overlooking here? Thanks in advance.
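
    Not from the original question, but a minimal pymongo sketch of the read-modify-rewrite approach described above (database, collection, and field names are made up; GridFS, pymongo's gridfs module, is the other stock option for blobs this size):

        from pymongo import MongoClient
        from bson.binary import Binary

        coll = MongoClient()["mydb"]["blobs"]        # hypothetical database/collection

        def patch_and_append(doc_id, patches, new_bytes):
            """patches: list of (offset, bytes) overwrites; new_bytes: appended at the end."""
            doc = coll.find_one({"_id": doc_id}, {"payload": 1}) or {}
            buf = bytearray(doc.get("payload", b""))
            for offset, chunk in patches:
                buf[offset:offset + len(chunk)] = chunk      # replace existing bytes in place
            buf.extend(new_bytes)                            # append new bytes
            coll.update_one({"_id": doc_id},
                            {"$set": {"payload": Binary(bytes(buf))}},
                            upsert=True)                     # rewrite the whole field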

    Read the article

  • VB .NET error handling, pass error to caller

    - by user1375452
    This is my very first project in VB.NET and I am now struggling to migrate a working VBA add-in to a VB.NET COM Add-in. I think I'm sort of getting the hang of it, but error handling has me stymied. This is a test I've been using to understand Try-Catch and how to pass an exception to the caller:

        Public Sub test()
            Dim ActWkSh As Excel.Worksheet
            Dim ActRng As Excel.Range
            Dim ActCll As Excel.Range
            Dim sVar01 As String
            Dim iVar01 As Integer
            Dim sVar02 As String
            Dim iVar02 As Integer
            Dim objVar01 As Object

            ActWkSh = Me.Application.ActiveSheet
            ActRng = Me.Application.Selection
            ActCll = Me.Application.ActiveCell

            iVar01 = iVar02 = 1
            sVar01 = CStr(ActCll.Value)
            sVar02 = CStr(ActCll.Offset(1, 0).Value)

            Try
                objVar01 = GetValuesV(sVar01, sVar02)
                'DO SOMETHING HERE
            Catch ex As Exception
                MsgBox("ERRORE: " + ex.Message)
                'LOG ERROR SOMEWHERE
            Finally
                MsgBox("DONE!")
            End Try
        End Sub

        Private Function GetValuesV(ByVal QryStr As Object, ByVal qryConn As String) As Object
            Dim cnn As Object
            Dim rs As Object
            Try
                cnn = CreateObject("ADODB.Connection")
                cnn.Open(qryConn)
                rs = CreateObject("ADODB.recordset")
                rs = cnn.Execute(QryStr)
                If rs.EOF = False Then
                    GetValuesV = rs.GetRows
                Else
                    Throw New System.Exception("Query Return Empty Set")
                End If
            Catch ex As Exception
                Throw ex
            Finally
                rs.Close()
                cnn.Close()
            End Try
        End Function

    I'd like the error message to propagate up to test, but MsgBox("ERRORE: " + ex.Message) pops up something unexpected ("Object variable or With block variable not set"). What am I doing wrong here?? Thanks, D

    Read the article

  • Understanding C++ dynamic allocation

    - by kiokko89
    Consider the following code:

        class CString {
        private:
            char* buff;
            size_t len;
        public:
            CString(const char* p) : len(0), buff(nullptr) {
                cout << "Constructor called!" << endl;
                if (p != nullptr) {
                    len = strlen(p);
                    if (len > 0) {
                        buff = new char[len+1];
                        strcpy_s(buff, len+1, p);
                    }
                }
            }
            CString(const CString& s) {
                cout << "Copy constructor called!" << endl;
                len = s.len;
                buff = new char[len+1];
                strcpy_s(buff, len+1, s.buff);
            }
            CString& operator = (const CString& rhs) {
                cout << "Assignment operator called!" << endl;
                if (this != &rhs) {
                    len = rhs.len;
                    delete[] buff;
                    buff = new char[len+1];
                    strcpy_s(buff, len+1, rhs.buff);
                }
                return *this;
            }
            CString operator + (const CString& rhs) const {
                cout << "Addition operator called!" << endl;
                size_t lenght = len + rhs.len + 1;
                char* tmp = new char[lenght];
                strcpy_s(tmp, lenght, buff);
                strcat_s(tmp, lenght, rhs.buff);
                return CString(tmp);
            }
            ~CString() {
                cout << "Destructor called!" << endl;
                delete[] buff;
            }
        };

        int main() {
            CString s1("Hello");
            CString s2("World");
            CString s3 = s1 + s2;
        }

    My problem is that I don't know how to delete the memory allocated in the addition operator function (char* tmp = new char[lenght]). I couldn't do this in the constructor (I tried delete[] p) because it is also called from main with arrays of chars as parameters which are not allocated on the heap... How can I get around this? (Sorry for my bad English...)

    Read the article

  • how to synchronize database table and directory with php

    - by twmulloy
    Hello, I have a directory with files and a database table with what should be the same files. I would like to be able to synchronize the database table with the directory. What would be the most efficient way to do this, or would I realistically only be able to do this in a brute-force manner? Here's my approach:

    1. Retrieve all of the files in the directory as an array.
    2. Retrieve all of the filenames in the database table as an array.
    3. Loop through the file values in the directory array and use in_array() on the database table array to verify the filename is in that array; if not, start building an array of the missing filenames, then run a db query to add each missing file row to the database table.
    4. Loop through the database table array and use in_array() on the directory array; anything not found in the directory array will just be deleted from the table.

    Is there a better way to go about this, or something better for this in PHP than in_array()?
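
    An aside not from the original question: the same comparison can be done with set differences instead of repeated in_array() calls (in PHP that would be array_diff()). The sketch below shows the idea in Python with a made-up table and column name:

        import os
        import sqlite3   # stand-in for the real database; table "files(filename)" is hypothetical

        def sync(directory, conn):
            on_disk = set(os.listdir(directory))
            in_db = {row[0] for row in conn.execute("SELECT filename FROM files")}

            # Files on disk that the table does not know about yet
            for name in on_disk - in_db:
                conn.execute("INSERT INTO files (filename) VALUES (?)", (name,))

            # Rows whose file no longer exists on disk
            for name in in_db - on_disk:
                conn.execute("DELETE FROM files WHERE filename = ?", (name,))

            conn.commit()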

    Read the article

  • How to efficiently build an interpreter (lexer+parser) in C?

    - by Rizo
    I'm trying to make a meta-language for writing markup code (such as XML and HTML) which can be directly embedded into C/C++ code. Here is a simple sample written in this language; I call it WDI (Web Development Interface):

        /*
         * Simple wdi/html sample source code
         */
        #include <mySite>

        string name = "myName";
        string toCapital(string str);

        html {
            head {
                title { mySiteTitle; }
                link(rel="stylesheet", href="style.css");
            }
            body(id="default") {
                // Page content wrapper
                div(id="wrapper", class="some_class") {
                    h1 { "Hello, " + toCapital(name) + "!"; }
                    // Lists post
                    ul(id="post_list") {
                        for(post in posts) {
                            li {
                                a(href=post.getID()) { post.tilte; }
                            }
                        }
                    }
                }
            }
        }

    Basically it is a C source with a user-friendly interface for HTML. As you can see, the traditional tag-based style is replaced by a C-like one, with blocks delimited by curly braces. I need to build an interpreter to translate this code to HTML and afterwards insert it into C, so that it can be compiled. The C part stays intact. Inside the WDI source it is not necessary to use prints; every return statement will be used for output (in a printf call). The program's output will be clean HTML code. So, for example, a heading 1 tag would be transformed like this:

        h1 { "Hello, " + toCapital(name) + "!"; }
        // would become:
        printf("<h1>Hello, %s!</h1>", toCapital(name));

    My main goal is to create an interpreter to translate WDI source to HTML like this:

        tag(attributes) {content} = <tag attributes>content</tag>

    Secondly, the HTML code returned by the interpreter has to be inserted into C code with printfs. Variables and functions that occur inside WDI should also be sorted out in order to use them as printf parameters (the case of toCapital(name) in the sample source). I am searching for an efficient way (I want to create a fast parser) to create a lexer and parser for WDI. I have already tried flex and bison, but I am not sure if they are the best tools. Are there any good alternatives? What is the best way to create such an interpreter? Can you advise some brief literature on this issue?
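
    Not from the original question: whatever tool ends up generating the C lexer, the core of a hand-written one is a single scan that classifies spans of input. A compact sketch of that idea in Python (the WDI token set below is an assumption, not a spec):

        import re

        # Assumed token kinds for a WDI-like input; adjust to the real grammar.
        TOKEN_SPEC = [
            ("STRING",   r'"[^"\n]*"'),
            ("IDENT",    r"[A-Za-z_][A-Za-z0-9_]*"),
            ("LBRACE",   r"\{"),
            ("RBRACE",   r"\}"),
            ("LPAREN",   r"\("),
            ("RPAREN",   r"\)"),
            ("OP",       r"[=,;+]"),
            ("SKIP",     r"[ \t\r\n]+|//[^\n]*"),
            ("MISMATCH", r"."),
        ]
        MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

        def tokenize(src):
            for m in MASTER.finditer(src):
                kind = m.lastgroup
                if kind == "SKIP":
                    continue                                  # whitespace and // comments
                if kind == "MISMATCH":
                    raise SyntaxError(f"unexpected character {m.group()!r}")
                yield kind, m.group()

        print(list(tokenize('h1 { "Hello, " + toCapital(name) + "!"; }')))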

    Read the article

  • PHP Function needed for GENERIC sorting of a recordset array

    - by donbriggs
    Somebody must have come up with a solution for this by now. I wrote a PHP class to display a recordset as an HTML table/datagrid, and I wish to expand it so that we can sort the datagrid by whichever column the user selects. In the below example data, we may need to sort the recordset array by Name, Shirt, Assign, or Age fields. I will take care of the display part; I just need help with sorting the data array. As usual, I query a database to get a result, iterate through the result, and put the records into an associative array. So, we end up with an array of arrays. (See below.) I need to be able to sort by any column in the dataset. However, I will not know the column names at design time, nor will I know if the columns will be string or numeric values. I have seen a ton of solutions to this, but I have not seen a GOOD and GENERIC solution. Can somebody please suggest a way that I can sort the recordset array that is GENERIC, and will work on any recordset? Again, I will not know the field names or datatypes at design time. The array presented below is ONLY an example.

        Array
        (
            [0] => Array ( [name] => Kirk   [shrit] => Gold  [assign] => Bridge )
            [1] => Array ( [name] => Spock  [shrit] => Blue  [assign] => Bridge )
            [2] => Array ( [name] => Uhura  [shrit] => Red   [assign] => Bridge )
            [3] => Array ( [name] => Scotty [shrit] => Red   [assign] => Engineering )
            [4] => Array ( [name] => McCoy  [shrit] => Blue  [assign] => Sick Bay )
        )
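
    Not from the original question: the generic answer in PHP is usort() with a comparison callback built from the chosen column name; the same idea, sketched in Python on the question's example data:

        rows = [
            {"name": "Kirk",   "shrit": "Gold", "assign": "Bridge"},
            {"name": "Spock",  "shrit": "Blue", "assign": "Bridge"},
            {"name": "Scotty", "shrit": "Red",  "assign": "Engineering"},
        ]

        def sort_recordset(rows, column, descending=False):
            """Sort a list of dicts by any column chosen at runtime.
            str() avoids type errors when a column mixes ints and strings,
            at the cost of lexicographic ordering for numbers."""
            return sorted(rows, key=lambda row: str(row.get(column, "")), reverse=descending)

        print([r["name"] for r in sort_recordset(rows, "shrit")])   # ['Spock', 'Kirk', 'Scotty']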

    Read the article

  • How to link a table to a field in MySQL server

    - by Nek
    I have this data from an XML file:

        <?xml version="1.0" encoding="utf-8" ?>
        <words>
            <id>...</id>
            <word>...</word>
            <meaning>...</meaning>
            <translation>
                <ES>...</ES>
                <PT>...</PT>
            </translation>
        </words>

    This forms the table named "words", which has four fields ("id", "word", "meaning" and "translation"). On the other hand, the "translation" field can hold several languages like ES, PT, EN, JA, KO, etc., so I create a table ("words.translation"; one field is "id" and the other ones are language ids like "ES", "PT", ...). I'm sorry for this newbie question, but I'd like to know a couple of things about this one-to-many relationship. How do I join (or link?) these two tables in MySQL? What information does the "translation" field in the "words" table have to store? What is the SQL query to get all the word information (JOIN syntax used?) Thanks for your patience.

    Read the article

  • Entity Framework generates values for NOT NULL columns which have a default defined in the db.

    - by Muhammad Kashif Nadeem
    Hi, I have a table Customer. One of the columns in the table is DateCreated. This column is NOT NULL, but a default value is defined for this column in the db. When I add a new Customer using EF4 from my code:

        var customer = new Customer();
        customer.CustomerName = "Hello";
        customer.Email = "[email protected]";
        // Watch out: commented out.
        //customer.DateCreated = DateTime.Now;
        context.AddToCustomers(customer);
        context.SaveChanges();

    the above code generates the following query:

        exec sp_executesql N'insert [dbo].[Customers]([CustomerName], [Email], [Phone], [DateCreated], [DateUpdated])
        values (@0, @1, null, @2, null)
        select [CustomerId]
        from [dbo].[Customers]
        where @@ROWCOUNT > 0 and [CustomerId] = scope_identity()
        ',N'@0 varchar(100),@1 varchar(100),@2 datetime2(7)
        ',@0='Hello',@1='[email protected]',@2='0001-01-01 00:00:00'

    and throws the following error:

        The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value.
        The statement has been terminated.

    Can you please tell me how NOT NULL columns which have default values at the db level can avoid having values generated by EF?

        DB:                            DateCreated DATETIME NOT NULL
        DateCreated properties in EF:  Nullable: False, Getter/Setter: public, Type: DateTime, DefaultValue: None

    Thanks.

    Read the article

  • Ways to update a dependent table in the same MySQL transaction?

    - by codie
    I need to update two tables inside a single transaction. The individual queries look something like this:

        1. INSERT INTO t1 (col1, col2) VALUES (val1, val2)
           ON DUPLICATE KEY UPDATE col2 = val2;

    If the above query causes an insert then I need to run the following statement on the second table:

        2. INSERT INTO t2 (col1, col2) VALUES (val1, val2)
           ON DUPLICATE KEY UPDATE col2 = col2 + val2;

    otherwise,

        3. UPDATE t2 SET col2 = col2 - old_val2 + val2 WHERE col1 = val1;
           -- old_val2 is the value of t1.col2 before it was updated

    Right now I run a SELECT on t1 first, to determine whether statement 1 will cause an insert or update on t1. Then I run statement 1 and either of 2 and 3 inside a transaction. What are the ways in which I can do all of these inside one transaction itself? The approach I was thinking of is the following:

        UPDATE t2, t1 set t2.col2 = t2.col2 - t1.col2
        WHERE t1.col1 = t2.col2 and t1.col1 = val1;

        INSERT INTO t1 (col1, col2) VALUES (val1, val2)
        ON DUPLICATE KEY UPDATE col2 = val2;

        INSERT INTO t2, t1 (t2.col1, t2.col2) VALUES (t1.col1, t1.col2)
        ON DUPLICATE KEY UPDATE t2.col2 = t2.col2 + t1.col2
        WHERE t1.col1 = t2.col2 and t1.col1 = val1;

    Unfortunately, there's no multi-table INSERT... ON DUPLICATE KEY UPDATE in MySQL 5.0. What else could I do?
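
    An aside not from the original question: if the SELECT is moved inside the transaction and made a locking read (SELECT ... FOR UPDATE), the check and all the statements run as one atomic unit. A hedged sketch using Python's mysql-connector driver (the driver choice is an assumption; table and column names are the question's own):

        import mysql.connector   # any DB-API-style MySQL driver works the same way

        conn = mysql.connector.connect(user="app", password="secret", database="mydb")
        cur = conn.cursor()      # autocommit is off by default, so this is one transaction

        def upsert(val1, val2):
            try:
                cur.execute("SELECT col2 FROM t1 WHERE col1 = %s FOR UPDATE", (val1,))
                row = cur.fetchone()                       # locks the t1 row if it exists
                cur.execute(
                    "INSERT INTO t1 (col1, col2) VALUES (%s, %s) "
                    "ON DUPLICATE KEY UPDATE col2 = %s", (val1, val2, val2))
                if row is None:                            # statement 1 performed an insert
                    cur.execute(
                        "INSERT INTO t2 (col1, col2) VALUES (%s, %s) "
                        "ON DUPLICATE KEY UPDATE col2 = col2 + %s", (val1, val2, val2))
                else:                                      # it performed an update
                    old_val2 = row[0]
                    cur.execute(
                        "UPDATE t2 SET col2 = col2 - %s + %s WHERE col1 = %s",
                        (old_val2, val2, val1))
                conn.commit()                              # all statements commit together
            except Exception:
                conn.rollback()
                raise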

    Read the article

  • Need a workaround to filter on related model and aggregated fields in Django

    - by parxier
    I opened a ticket for this problem. In a nutshell, here is my model:

        class Plan(models.Model):
            cap = models.IntegerField()

        class Phone(models.Model):
            plan = models.ForeignKey(Plan, related_name='phones')

        class Call(models.Model):
            phone = models.ForeignKey(Phone, related_name='calls')
            cost = models.IntegerField()

    I want to run a query like this one:

        Phone.objects.annotate(total_cost=Sum('calls__cost')).filter(total_cost__gte=0.5*F('plan__cap'))

    Unfortunately Django generates bad SQL:

        SELECT "app_phone"."id", "app_phone"."plan_id", SUM("app_call"."cost") AS "total_cost"
        FROM "app_phone"
        INNER JOIN "app_plan" ON ("app_phone"."plan_id" = "app_plan"."id")
        LEFT OUTER JOIN "app_call" ON ("app_phone"."id" = "app_call"."phone_id")
        GROUP BY "app_phone"."id", "app_phone"."plan_id"
        HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"."cap"

    and errors with:

        ProgrammingError: column "app_plan.cap" must appear in the GROUP BY clause or be used in an aggregate function
        LINE 1: ...."plan_id" HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"....

    Is there any workaround apart from running raw SQL?
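
    Not from the original question: one workaround that stays mostly in the ORM is to keep the aggregation in the database and move only the cap comparison into Python, at the cost of pulling the candidate rows back. A hedged sketch against the models above:

        from django.db.models import Sum

        # Phone comes from the question's models module.
        def phones_over_half_cap():
            # annotate() still does the SUM in SQL; only the comparison moves to Python.
            phones = (Phone.objects
                      .select_related('plan')
                      .annotate(total_cost=Sum('calls__cost')))
            return [p for p in phones
                    if (p.total_cost or 0) >= 0.5 * p.plan.cap]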

    Read the article

  • PHP: Join two separate mysql queries into the same json data object

    - by Dan
    I'm trying to mesh the below MySQL query results into a single JSON object, but I'm not quite sure how to do it properly.

        //return data
        $sql_result = mysql_query($sql,$connection) or die ("Fail.");
        $arr = array();
        while($obj = mysql_fetch_object($sql_result)) {
            $arr[] = $obj;
        }
        echo json_encode($arr); //return json

        //plus the selected options
        $sql_result2 = mysql_query($sql2,$connection) or die ("Fail.");
        $arr2 = array();
        while($obj2 = mysql_fetch_object($sql_result2)) {
            $arr2[] = $obj2;
        }
        echo json_encode($arr2); //return json

    Here's the current result:

        [{"po_number":"test","start_date":"1261116000","end_date":"1262239200","description":"test","taa_required":"0","account_overdue":"1","jobs_id":null,"job_number":null,"companies_id":"4","companies_name":"Primacore Inc."}][{"types_id":"37"},{"types_id":"4"}]

    Notice how the last section [{"types_id":"37"},{"types_id":"4"}] is placed into a separate chunk under the root. I want it to be nested inside the first branch under a name like "types". I think my question has more to do with PHP array manipulation, but I'm not the best with that. Thank you for any guidance.
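
    Not from the original question: the usual fix is to merge the two result sets into one structure before a single json_encode() call; the nesting step looks like this in Python (key names are taken from the question's output, trimmed for brevity):

        import json

        # Stand-ins for the two query results shown above
        main_rows = [{"po_number": "test", "companies_name": "Primacore Inc."}]
        type_rows = [{"types_id": "37"}, {"types_id": "4"}]

        # Nest the second result set under a "types" key of the first row,
        # then encode once so the client receives a single JSON document.
        payload = dict(main_rows[0])
        payload["types"] = [row["types_id"] for row in type_rows]

        print(json.dumps(payload))
        # {"po_number": "test", "companies_name": "Primacore Inc.", "types": ["37", "4"]}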

    Read the article

  • Intent provided by Cursor is not fired correctly (LiveFolders)

    - by Felix
    In my desperation with trying to get LiveFolders working, I have tried the following in my LiveFolder ContentProvider:

        public Cursor query(Uri uri, String[] projection, String selection,
                            String[] selectionArgs, String sortOrder) {
            MatrixCursor mc = new MatrixCursor(new String[] {
                LiveFolders._ID,
                LiveFolders.NAME,
                LiveFolders.INTENT
            });
            Intent i = null;
            for (int j=0; j < 5; j++) {
                i = new Intent(Intent.ACTION_VIEW, Uri.parse("http://www.google.com/"));
                mc.addRow(new Object[] { j, "hello", i });
            }
            return mc;
        }

    This, under normal circumstances, should launch the browser and display the Google homepage when clicking on an item in the LiveFolder. But it doesn't. It gives an "Application is not installed on your phone" error. No, I'm not defining a base intent for my LiveFolder. logcat says:

        I/ActivityManager(   74): Starting activity: Intent { act=android.intent.action.VIEW dat=Intent { act=android.intent.action.VIEW dat=http://www.google.com/ } flg=0x10000000 }

    It seems it embeds the Intent I give it in the data section of the Intent that is actually fired. Why is it doing this? I'm really starting to believe it's a platform bug.

    Read the article

  • Checking if a console application is still running using the Process class

    - by Ced
    I'm making an application that will monitor the state of another process and restart it when it stops responding, exits, or throws an error. However, I'm having trouble making it reliably check whether the process (being a C++ console window) has stopped responding. My code looks like this:

        public void monitorserver()
        {
            while (true)
            {
                server.StartInfo = new ProcessStartInfo(textbox_srcdsexe.Text, startstring);
                server.Start();
                log("server started");
                log("Monitor started.");
                while (server.Responding)
                {
                    if (server.HasExited)
                    {
                        log("server exitted, Restarting.");
                        break;
                    }
                    log("server is running: " + server.Responding.ToString());
                    Thread.Sleep(1000);
                }
                log("Server stopped responding, terminating..");
                try
                {
                    server.Kill();
                }
                catch (Exception) { }
            }
        }

    The application I'm monitoring is Valve's Source Dedicated Server, running Garry's Mod, and I'm over-stressing the physics engine to simulate it stopping responding. However, this never triggers the Process class recognizing it as 'stopped responding'. I know there are ways to directly query the Source server using their own protocol, but I'd like to keep it simple and universal (so that I can maybe use it for different applications in the future). Any help appreciated.

    Read the article

  • Is there any way to provide a custom factory for .NET Framework creation of entities from EF4?

    - by ILICH
    There are a lot of posts about how cool POCO objects are and how Entity Framework 4 supports them. I decided to try it out with a domain-driven-development-oriented architecture and ended up with domain entities that have dependencies on services. So far so good. Imagine my Products are POCO objects. When I query for objects like this:

        NorthwindContext db = new NorthwindContext();
        var products = db.Products.ToList();

    EF creates instances of the products for me. Now I want to inject dependencies into my POCO objects (products). The only way I see is to add some method to NorthwindContext that does something like the pseudo-code below:

        public List<Product> GetProducts(){
            var products = database.Products.ToList();
            container.BuildUp(products); //inject dependencies
            return products;
        }

    But what if I want to make my repository more flexible, like this:

        public ObjectSet<Product> GetProducts() { ... }

    So, I really need a factory to make it more lazy and LINQ-friendly. Please help!

    Read the article

  • How to make Entity Key Mapping in Entity Framework like sql's foreign key?

    - by programmerist
    I am trying to set up entity mapping in my Entity Framework app, but how can I do it? I try to write something like this:

        var test = ( from k in Kartlar
                     where k.Rehber.....

    but in the code above I cannot see (or use) k.Rehber. If that worked, I could write k.Rehber.ID and the others. In other words, I cannot write:

        from k in Kartlar
        where k.Rehber.ID == 123     //assuming that the navigation property name is Rehber and the primary key of the Rehber table is ID
           && k.Kampanya.ID == 345   //assuming that the navigation property name is Kampanya and the primary key of the Kampanya table is ID
           && k.Birim.ID == 567      //assuming that the navigation property name is Birim and the primary key of the Birim table is ID
        select k

    You can also look at this image: http://i42.tinypic.com/2nqyyc6.png. I have a table that includes 3 foreign key fields, like this:

        My Table: Kartlar
            ID (Pkey)
            RehberID (Fkey)
            KampanyaID (Fkey)
            BrimID (Fkey)
            Name
            Detail

    How can I write the entity query with LINQ for this?

        select * from Kartlar where RehberID=123 and KampanyaID=345 and BrimID=567

    But please be careful: I cannot see RehberID, KampanyaID, BrimID in the entity; they are foreign keys. I should use the entity key, but how?

    Read the article

  • Hibernate 3.5.0 causes extreme performance problems

    - by user303396
    I've recently updated from hibernate 3.3.1.GA to hibernate 3.5.0 and I'm having a lot of performance issues. As a test, I added around 8000 entities to my DB (which in turn cause other entities to be saved). These entities are saved in batches of 20 so that the transactions aren't too large for performance reasons. When using hibernate 3.3.1.GA all 8000 entities get saved in about 3 minutes. When using hibernate 3.5.0 it starts out slower than with hibernate 3.3.1. But it gets slower and slower. At around 4,000 entities, it sometimes takes 5 minutes just to save a batch of 20. If I then go to a mysql console and manually type in an insert statement from the mysql general query log, half of them run perfect in 0.00 seconds. And half of them take a long time (maybe 40 seconds) or timeout with "ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction" from MySQL. Has something changed in hibernate's transaction management in version 3.5.0 that I should be aware of? The ONLY thing I changed to experience these unusable performance issues is replace the following hibernate 3.3.1.GA jar files: com.springsource.org.hibernate-3.3.1.GA.jar, com.springsource.org.hibernate.annotations-3.4.0.GA.jar, com.springsource.org.hibernate.annotations.common-3.3.0.ga.jar, com.springsource.javassist-3.3.0.ga.jar with the new hibernate 3.5.0 release hibernate3.jar and javassist-3.9.0.GA.jar. Thanks.

    Read the article

  • Extracting noun+noun or (adj|noun)+noun from Text

    - by ssuhan
    I would like to ask if it is possible to extract noun+noun or (adj|noun)+noun sequences with the R package openNLP. That is, I would like to use linguistic filtering to extract candidate noun phrases. Could you direct me on how to do this? Many thanks. Thanks for the responses; here is the code:

        library("openNLP")
        acq <- "Gulf Applied Technologies Inc said it sold its subsidiaries engaged in pipeline and terminal operations for 12.2 mln dlrs. The company said the sale is subject to certain post closing adjustments, which it did not explain. Reuter."
        acqTag <- tagPOS(acq)
        acqTagSplit = strsplit(acqTag," ")
        acqTagSplit

        qq = 0
        tag = 0
        for (i in 1:length(acqTagSplit[[1]])){
            qq[i] <- strsplit(acqTagSplit[[1]][i],'/')
            tag[i] = qq[i][[1]][2]
        }

        index = 0
        k = 0
        for (i in 1:(length(acqTagSplit[[1]])-1)) {
            if ((tag[i] == "NN" && tag[i+1] == "NN") |
                (tag[i] == "NNS" && tag[i+1] == "NNS") |
                (tag[i] == "NNS" && tag[i+1] == "NN") |
                (tag[i] == "NN" && tag[i+1] == "NNS") |
                (tag[i] == "JJ" && tag[i+1] == "NN") |
                (tag[i] == "JJ" && tag[i+1] == "NNS")){
                k = k + 1
                index[k] = i
            }
        }
        index

    The reader can refer index back to acqTagSplit to do the noun+noun or (adj|noun)+noun extraction. (The code is not optimal but works. If you have any ideas, please let me know.) Furthermore, I still have a problem. Justeson and Katz (1995) proposed another linguistic filter to extract candidate noun phrases:

        ((Adj|Noun)+|((Adj|Noun)*(Noun-Prep)?)(Adj|Noun)*)Noun

    I cannot quite understand its meaning; could someone do me a favor and explain it, or translate such a representation into R?
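
    Not from the original question: the same candidate-phrase filter is easy to prototype with a regular-expression chunker over POS tags. The sketch below uses Python's NLTK rather than openNLP, purely to illustrate the (adj|noun)+noun pattern; the grammar string is my own encoding of it, not a canonical one.

        import nltk   # needs the 'punkt' and 'averaged_perceptron_tagger' data packages

        text = ("Gulf Applied Technologies Inc said it sold its subsidiaries engaged in "
                "pipeline and terminal operations for 12.2 mln dlrs.")

        tagged = nltk.pos_tag(nltk.word_tokenize(text))

        # Candidate noun phrases: one or more adjectives/nouns followed by a noun.
        chunker = nltk.RegexpParser("NP: {<JJ|NN.*>+<NN.*>}")

        for subtree in chunker.parse(tagged).subtrees():
            if subtree.label() == "NP":
                print(" ".join(word for word, tag in subtree.leaves()))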

    Read the article

  • How I can retrieve files from a folder on the hard disk and display uploaded file data in a textarea

    - by Deepak Narwal
    I have made an application form in which I am asking for username, password, email id and the user's resume. After uploading the resume I am storing it on the hard disk in htdocs/uploadedfiles/.. in a format something like username_filename. In the database I am storing the file name, file size and file type. Some of the code for this is shown here:

        $filesize=$_FILES['file']['size'];
        $filename=$_FILES['file']['name'];
        $filetype=$_FILES['file']['type'];
        $temp_name=$_FILES['file']['tmp_name']; //temporary name of uploaded file
        $pwd_hash = hash('sha1',$_POST['password']);
        $target_path = "uploadedfiles/";
        $target_path = $target_path.$_POST['username']."_".basename( $_FILES['file']['name']);
        move_uploaded_file($_FILES['file']['tmp_name'], $target_path) ;
        $sql="insert into employee values ('NULL','{$_POST[username]}','{$pwd_hash}','{$filename}','{$filetype}','$filesize',NOW())";

    Now I have two questions:

    1. How can I display this file data in a textarea (something like naukri.com's resume section)?
    2. How can one retrieve that resume file from the folder on the hard disk? What query should I write to fetch this file from that folder? I know how to retrieve data from the database, but I don't know how to retrieve data from a folder on the hard disk, for example when the user wants to delete this file or wants to download it. How can I do this?

    Read the article

  • Why does Firefox round-trip to the server to determine whether my files are modified?

    - by erikkallen
    I have some static content on my web site that I have set up caching for (using Asp.NET MVC). According to Firebug, the first time I open the page, Firefox sends this request:

        GET /CoreContent/Core.css?asm=0.7.3614.34951
        Host: 127.0.0.1:3916
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 (.NET CLR 3.5.30729)
        Accept: text/css,*/*;q=0.1
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Referer: http://127.0.0.1:3916/Edit/1/101
        Cookie: .ASPXAUTH=52312E5A802C1A079E2BA29AA2BFBC5A38058977B84452D62ED52855D4164659B4307661EC73A307BFFB2ED3871C67CB3A9AAFDB3A75A99AC0A21C63A6AADE9A11A7138C672E75125D9FF3EFFBD9BF62
        Pragma: no-cache
        Cache-Control: no-cache

    which my server replies to with this:

        Server: ASP.NET Development Server/9.0.0.0
        Date: Mon, 23 Nov 2009 18:44:41 GMT
        X-AspNet-Version: 2.0.50727
        X-AspNetMvc-Version: 1.0
        Cache-Control: public, max-age=31535671
        Expires: Tue, 23 Nov 2010 18:39:12 GMT
        Last-Modified: Mon, 23 Nov 2009 18:39:12 GMT
        Vary: *
        Content-Type: text/css
        Content-Length: 15006
        Connection: Close

    So far, so good. However, if I refresh Firefox (not a cache-clearing refresh, just a normal one), during that refresh cycle Firefox will once again go to the server with this request:

        GET /CoreContent/Core.css?asm=0.7.3614.34951
        Host: 127.0.0.1:3916
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 (.NET CLR 3.5.30729)
        Accept: text/css,*/*;q=0.1
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Referer: http://127.0.0.1:3916/Edit/1/101
        Cookie: .ASPXAUTH=52312E5A802C1A079E2BA29AA2BFBC5A38058977B84452D62ED52855D4164659B4307661EC73A307BFFB2ED3871C67CB3A9AAFDB3A75A99AC0A21C63A6AADE9A11A7138C672E75125D9FF3EFFBD9BF62
        If-Modified-Since: Mon, 23 Nov 2009 18:39:20 GMT
        Cache-Control: max-age=0

    to which my server responds 304 Not Modified. Why does Firefox issue this second request? In the first response, I said that the cache does not expire for a year (I intend to use query parameters whenever things change). Do I have to add another response header to prevent this extra roundtrip? Edit: It does not matter whether I press refresh, or whether I go to the page again (or a different URL, which references the same external files). Firefox does the same again. Also, I don't claim this to be a bug in FF, I just wonder if there is another header I can set which means "This document will never change, don't bother me again".

    Read the article

  • Checking multiple conditions in Ruby (within Rails, which may not matter)

    - by Ev
    Hello rubyists and railers, I have a method which checks over a params hash to make sure that it contains certain keys, and to make sure that certain values are set within a certain range. This is for an action that responds to a POST query from an iPhone app. Anyway, this method is checking for about 10 different conditions, any of which will result in an HTTP error being returned (I'm still considering this, but possibly a 400 Bad Request error). My current syntax is basically this (paraphrased):

        def invalid_submission_params?(params)
          [check one] or [check two] or [check three] or [check four] # etc etc
        end

    where each of the check statements returns true if that particular check results in an invalid parameter set. I call it as a before filter with params[:submission] as the argument. This seems a little ugly (all the strung-together or statements). Is there a better way? I have tried using case but can't see a way to make it more elegant. Or, perhaps, is there a Rails method that lets me check the incoming params hash for certain conditions before handing control off to my action method?
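
    Not from the original question, and in Python rather than Ruby, but the usual way to tame a long chain of or-ed checks is a table of named predicates evaluated in one pass (the key names below are invented):

        REQUIRED_CHECKS = [
            ("missing device id",  lambda p: "device_id" in p),
            ("missing score",      lambda p: "score" in p),
            ("score out of range", lambda p: 0 <= p.get("score", -1) <= 100),
        ]

        def invalid_submission_params(params):
            """Return the names of failed checks; an empty list means the params are valid."""
            return [name for name, ok in REQUIRED_CHECKS if not ok(params)]

        errors = invalid_submission_params({"device_id": "abc", "score": 250})
        if errors:
            print("400 Bad Request:", ", ".join(errors))   # score out of range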

    Read the article

  • IE "Microsoft JScript runtime error: Object expected"

    - by Stephen Borg
    Hi there, I have problems with JavaScript only when using IE. The error I am getting is "Microsoft JScript runtime error: Object expected" and I have no idea why. It then jumps into the jQuery 1.4.2 file, without giving me a proper error message. All I am doing is reading the raw URL on page load, getting a query string parameter named Search, and using that in an AJAX call to return products and put them into a DIV. No biggies, but somehow IE is managing to blow my page up :-( Any ideas? Code as follows:

        <script type="text/javascript">
            $(document).ready(function (e) {
                $('.boxLoader').show();

                function getParameterByName(name) {
                    name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]");
                    var regexS = "[\\?&]" + name + "=([^&#]*)";
                    var regex = new RegExp(regexS);
                    var results = regex.exec(window.location.href);
                    if (results == null)
                        return "";
                    else
                        return decodeURIComponent(results[1].replace(/\+/g, " "));
                }

                var Search;
                Search = getParameterByName("search");
                $('#searchCriteria').text(Search);

                $.get("/Handlers/processProducts.aspx", { SearchCriteria: Search }, function (data) {
                    $('#innercontent').html(data);
                    $('#innercontent').fadeIn(200);
                    $('.boxLoader').fadeOut(200);
                });

                $('#searchBox').live("click", function () {
                    $.get("/Handlers/processProducts.aspx", { SearchCriteria: $('#searchCriteria').val() }, function (data) {
                        $('#innercontent').html(data);
                        $('#innercontent').fadeIn(200);
                        $('.boxLoader').fadeOut(200);
                    });
                });
            });
        </script>

    Read the article

  • Javascript CS-PRNG - 64-bit random

    - by Jack
    Hi, I need to generate a cryptographically secure 64-bit unsigned random integer in JavaScript. The first problem is that JavaScript only allows 64-bit signed integers, so 9223372036854775808 is the biggest supported integer without going into floating point use, I think? To fix this I can use a big number library, no problem. My method:

        var randNum = SHA256( randBigInt(128, 0) ) % 2^64;

    where SHA256() is a secure hash function and randBigInt() is defined below as a non-crypto PRNG; I'm giving it a 128-bit seed so brute force shouldn't be a problem.

        randBigInt(n,s) //return an n-bit random BigInt (n>=1). If s=1, then the most significant of those n bits is set to 1.

    Is this a secure method to generate a cryptographically secure 64-bit random int? And importantly, does taking the mod 2^64 guarantee 100% that I have a 64-bit number? An abstract example: say this number is prime (it isn't, I know); I will use it in the Galois field GF(2^p), where p must be 64 bits so that every possible 1-63 bit number is a field element. In this query, my random int must be larger than any 63-bit number. And I'm not sure I'm correct in taking the mod 2^64 of a 256-bit hash output. Thanks (hope that makes sense).

    Read the article

  • Using HABTM relationships in cakephp plugins with unique set to false

    - by Dean
    I am working on a plugin for our CakePHP CMS that will handle blogs. When getting to the tags, I needed to set the HABTM relationship to unique = false to be able to add tags to a post without having to reset them all. The BlogPost model looks like this:

        class BlogPost extends AppModel {
            var $name = 'BlogPost';
            var $actsAs = array('Core.WhoDidIt', 'Containable');
            var $hasMany = array('Blog.BlogPostComment');
            var $hasAndBelongsToMany = array('Blog.BlogTag' => array('unique' => false), 'Blog.BlogCategory');
        }

    The BlogTag model looks like this:

        class BlogTag extends AppModel {
            var $name = 'BlogTag';
            var $actsAs = array('Containable');
            var $hasAndBelongsToMany = array('Blog.BlogPost');
        }

    The SQL error I am getting when I have the unique = true setting in the HABTM relationship between BlogPost and BlogTag is:

        Query: SELECT `Blog`.`BlogTag`.`id`, `Blog`.`BlogTag`.`name`, `Blog`.`BlogTag`.`slug`, `Blog`.`BlogTag`.`created_by`, `Blog`.`BlogTag`.`modified_by`, `Blog`.`BlogTag`.`created`, `Blog`.`BlogTag`.`modified`, `BlogPostsBlogTag`.`blog_post_id`, `BlogPostsBlogTag`.`blog_tag_id`
        FROM `blog_tags` AS `Blog`.`BlogTag`
        JOIN `blog_posts_blog_tags` AS `BlogPostsBlogTag` ON (`BlogPostsBlogTag`.`blog_post_id` = 4 AND `BlogPostsBlogTag`.`blog_tag_id` = `Blog`.`BlogTag`.`id`)

    As you can see, it is trying to alias the blog_tags table as `Blog`.`BlogTag`, which isn't a valid MySQL name. When I remove the unique = true from the relationship it all works fine and I can save one tag, but when adding another it just erases the first one and puts the new one in its place. Does anyone have any ideas? Is it a bug, or am I just missing something? Cheers, Dean

    Read the article

  • URL development and mod_rewrite

    - by iRector
    My site is made up of the main page and multiple sub-directories, all under the same domain. My URLs are currently like the left column; the ideal clean version is on the right:

        mysite.com
        mysite.com/?content=content1           | mysite.com/content1/
        mysite.com/?content=content2&page=4    | mysite.com/content2/4/
        mysite.com/?content=content3           | mysite.com/content3/
        mysite.com/?content=content4           | mysite.com/content4/
        mysite.com/?content=article&id=34      | mysite.com/article/34/

    Then the sub-directories are essentially the same (mysite.com/subdir, mysite.com/subdir2, mysite.com/subdir3, etc.):

        mysite.com/subdir/?content=content1         | mysite.com/subdir/content1/
        mysite.com/subdir/?content=content2&page=4  | mysite.com/subdir/content2/4/
        mysite.com/subdir/?content=content3         | mysite.com/subdir/content3/
        mysite.com/subdir/?content=content4         | mysite.com/subdir/content4/
        mysite.com/subdir/?content=article&id=34    | mysite.com/subdir/article/34/

    I've used mod_rewrite briefly, but I'm not sure how to approach these multiple variables. Also, how would I differentiate between the actual subfolders and the content variable, so as to prevent 'subdir' or 'subdir2' from being plugged in as the content variable for the root site? I've played around with plenty of code snippets, but I've wiped my .htaccess slate clean and approach you all in an attempt to help me repopulate it. Your input would be thoroughly appreciated. Note: the only time the page query string will be needed is when 'content' == 'content2' (?content=content2&page=4). The same rule is shared by the article/id relationship; all other 'content' values are expected to be dynamic.

    Read the article
