Search Results

Search found 37260 results on 1491 pages for 'command query responsibil'.


  • Installing Rails on Mountain Lion

    - by Jordan Medlock
    I was wondering if you could help me find out why I cannot install Ruby on Rails on my MBP with OS X Mountain Lion. It's a weird problem and I'll give you as much info as I can. I've installed Ruby and it's working, at version 1.9.3. I've also installed RubyGems, and it has worked for every other gem I've tried to install. Its version is 1.8.24. When I run $ sudo gem install rails it replies with the message:

        Successfully installed rails-3.2.8
        1 gem installed

    Although when I ask it rails -v it returns:

        Rails is not currently installed on this system.
        To get the latest version, simply type:

            $ sudo gem install rails

        You can then rerun your "rails" command.

    What should I do? The rails bash file (/usr/bin/rails) contains:

        #!/usr/bin/ruby
        # Stub rails command to load rails from Gems or print an error if not installed.
        require 'rubygems'
        version = ">= 0"
        if ARGV.first =~ /^_(.*)_$/ and Gem::Version.correct? $1 then
          version = $1
          ARGV.shift
        end
        begin
          gem 'railties', version or raise
        rescue Exception
          puts 'Rails is not currently installed on this system. To get the latest version, simply type:'
          puts
          puts '    $ sudo gem install rails'
          puts
          puts 'You can then rerun your "rails" command.'
          exit 0
        end
        load Gem.bin_path('railties', 'rails', version)

    That must mean that the gem files aren't there, or are old or corrupted. How can I check that?


  • Heavy Mysql operation & Time Constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck on in my application, which is based on PHP & MySQL. The application is for data migration: data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate check, id generation), it has to be inserted into one central table and then into 5 different tables. There, an id is generated and that id has to be updated back to the central table. There are different sets of records and validation rules.

    The problem I am facing is that when I insert a file of, say, 4K rows (containing 20 columns), it works fine: within 15 min everything gets inserted. But when I insert the same records again, it takes one hour (ideally it should finish quickly, marking the earlier inserted data as duplicate). After going through the log file, I noticed there is a MySQL SELECT statement where I am checking the duplicates and getting the IDs which are duplicates. Then I am calling a function inside a for loop which inserts records into the 5 tables and updates the id to the central table. This called function takes the major share of the whole processing time.

    P.S. The records have to be inserted record by record. Kindly suggest some solution.

        // This is the sample code
        $query = mysql_query("SELECT DISTINCT p1.ID
            FROM table1 p1, table2 p2, table3 a
            WHERE p2.datatype = 0 AND (p1.datatype = 1 || p1.datatype = 2) AND p2.ID = 0
              AND p1.ID = a.ID AND p1.coulmn1 = p2.column1
              AND p1.coulmn2 = p2.coulmn2 AND a.coulmn3 = p2.column3");
        $num = mysql_num_rows($query);
        for ($i = 0; $i < $num; $i++) {
            $f = mysql_result($query, $i, "ID");
            // calling function
            RecordInsert($f);
        }
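    A minimal set-based sketch of the duplicate check described above, assuming a hypothetical staging table, a central table with a two-column natural key, and an is_duplicate flag; every table and column name here is illustrative, not from the question:

        -- Mark duplicates in one pass instead of row by row (MySQL UPDATE ... JOIN)
        UPDATE staging s
        JOIN central c
          ON c.key_col1 = s.key_col1
         AND c.key_col2 = s.key_col2
        SET s.is_duplicate = 1;

        -- Then insert only the non-duplicates in one statement
        INSERT INTO central (key_col1, key_col2, payload)
        SELECT key_col1, key_col2, payload
        FROM staging
        WHERE is_duplicate = 0;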


  • Creating a function in Postgresql that does not return composite values

    - by celenius
    I'm learning how to write functions in PostgreSQL. I've defined a function called _tmp_myfunction() which takes in an id and returns a table (I also define a table object type called _tmp_mytable):

        -- create object type to be returned
        CREATE TYPE _tmp_mytable AS (
            id integer,
            cost double precision
        );

        -- create function which returns query
        CREATE OR REPLACE FUNCTION _tmp_myfunction(id integer)
        RETURNS SETOF _tmp_mytable AS $$
        BEGIN
            RETURN QUERY
            SELECT id, cost
            FROM sales
            WHERE id = sales.id;
        END;
        $$ LANGUAGE plpgsql;

    This works fine when I use one id and call it using the following approach:

        SELECT * FROM _tmp_myfunction(402);

    What I would like is to call it using a column of values instead of just one value. However, if I use the following approach I end up with all values of the table in one column, separated by commas:

        -- call function using all values in a column
        SELECT _tmp_myfunction(t.id) FROM transactions as t;

    I understand that I get the same result if I use SELECT _tmp_myfunction(402); instead of SELECT * FROM _tmp_myfunction(402); but I don't know how to construct my query in such a way that I can separate out the results.
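    A hedged sketch of two standard ways to expand the function's result back into separate columns (only the transactions table comes from the question; nothing else is assumed):

        -- Composite expansion: wrap the call and expand with .*
        SELECT (_tmp_myfunction(t.id)).*
        FROM transactions AS t;

        -- Or, on PostgreSQL 9.3+, a LATERAL join
        SELECT f.*
        FROM transactions AS t,
             LATERAL _tmp_myfunction(t.id) AS f;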


  • Visual Studio 2008 Unit test does not pick up code changes unless I build the entire solution

    - by Orion Edwards
    Here's the scenario:

    1. Change my code
    2. Change my unit test for that code
    3. With the cursor inside the unit test class/method, invoke VS2008's "Run tests in current context" command

    The Visual Studio "Output" window indicates that the code dll and the test dll both successfully build (in that order). The problem, however, is that the unit test does not use the latest version of the dll which it has just built. Instead, it uses the previously built dll (which doesn't have the updated code in it), so the test fails. When adding a new method, this results in a MethodNotImplementedException, and when adding a class, it results in a TypeLoadException, both because the unit test thinks the new code is there, and it isn't! If I'm just updating an existing method, then the test just fails due to incorrect results.

    I can 'work around' the problem by doing this:

    1. Change my code
    2. Change my unit test for that code
    3. Invoke VS2008's 'Build Solution' command
    4. With the cursor inside the unit test class/method, invoke VS2008's "Run tests in current context" command

    The problem is that doing a full build solution (even though nothing has changed) takes upwards of 30 seconds, as I have approx 50 C# projects, and VS2008 is not smart enough to realize that only 2 of them need to be looked at. Having to wait 30 seconds just to change 1 line of code and re-run a unit test is abysmal. Is there anything I can do to fix this? None of my code is in the GAC or anything funny like that; it's just ordinary old dlls (building against .NET 3.5 SP1 on a Win7/64-bit machine). Please help!


  • IIS 6+ASP.NET - many temp files generated

    - by moshe_ravid
    I have an ASP.NET app plus some .NET web services running on IIS 6 (Win 2003 Server). The issue is that IIS is generating a lot (!) of files in the "c:\WINDOWS\Temp" directory. A lot of files means thousands of files, which so far add up to more than 3 GB. The files are generated by this command:

        C:\WINDOWS\SysWOW64\inetsrv "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc.exe" /t:library /utf8output
            /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\vfagt\113819dd\db0d5802\assembly\dl3\fedc6ef1\006e24d8_3bc9ca01\VfAgentWService.DLL"
            /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Web.Services\2.0.0.0__b03f5f7f11d50a3a\System.Web.Services.dll"
            /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll"
            /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Xml\2.0.0.0__b77a5c561934e089\System.Xml.dll"
            /R:"C:\WINDOWS\assembly\GAC_MSIL\System\2.0.0.0__b77a5c561934e089\System.dll"
            /out:"C:\WINDOWS\TEMP\9i_i2bmg.dll" /debug- /optimize+ /nostdlib
            /D:_DYNAMIC_XMLSERIALIZER_COMPILATION "C:\WINDOWS\TEMP\9i_i2bmg.0.cs"

    The files in the temp directory are pairs of *.out & *.err, where the *.err file is zero size, and the *.out file contains the compilation output messages. What is causing IIS to generate so many files? How can I prevent it?

    UPDATE: The problem is that the command I described above (csc.exe) is being executed many (many) times, causing the .out & .err files to be generated so many times that they consume the disk space. So my question is: what is causing this command to run so many times? (I don't have that many .aspx & .asmx files in my web app.) Thanks, Moe


  • BasicDBObject or QueryBuilder and some newbie questions about Java and Mongo

    - by Kevin Xu
    Hi, I'm a fresh newbie to MongoDB.

    Q1: Between

        query = new BasicDBObject();
        query.put("i", new BasicDBObject("$gt", 13));

    and

        query = new QueryBuilder().put("i").greaterThan(13).get();

    is there any difference inside the system?

    Q2: I've created a class:

        class findkv extends BasicDBObject {
            // op is one of: gt gte lt lte
            public findkv(String fieldname, String op, Object tvalue) {
                if (op.equals(""))
                    this.put(fieldname, tvalue);
                else
                    this.put(fieldname, new BasicDBObject(op, tvalue));
            }
        }

    Shall I use it, or shall I just use the original functions?

    Q3: I've used the mongo shell for a few weeks and have become accustomed to it; I find writing in the mongo shell faster and shorter. Which side has more advantages, writing in mongo or in Java? I need to dump the data from mongo to MySQL.

    Q4: I have if (statement==true) return else dowhat; which seemingly can't be compiled. I know I can write if (statement!=true) dowhat else return, but can I still write it in the first style?

    Q5: My Eclipse is "Eclipse Java EE IDE for Web Developers. Version: Juno Release. Build id: 20120614-1722". I'd like to install Perl support (I haven't learned Perl yet). I chose Install Update http://e-p-i-c.sf.net/updates/testing but it doesn't work; is there any way to install Perl support into Eclipse manually?


  • Relational MySQL - fetched properties?

    - by Kelso.b
    I'm currently using the following PHP code:

        // Get all subordinates
        $subords = array();
        $supervisorID = $this->session->userdata('supervisor_id');
        $result = $this->db->query(sprintf("SELECT * FROM users WHERE supervisor_id=%d AND id!=%d", $supervisorID, $supervisorID));
        $user_list_query = 'user_id=' . $supervisorID;
        foreach ($result->result() as $user) {
            $user_list_query .= ' OR user_id=' . $user->id;
            $subords[$user->id] = $user;
        }

        // Get Submissions
        $submissionsResult = $this->db->query(sprintf("SELECT * FROM submissions WHERE %s", $user_list_query));
        $submissions = array();
        foreach ($submissionsResult->result() as $submission) {
            $entriesResult = $this->db->query(sprintf("SELECT * FROM submittedentries WHERE timestamp=%d", $submission->timestamp));
            $entries = array();
            foreach ($entriesResult->result() as $entry)
                $entries[] = $entry;
            $submissions[] = array(
                'user' => $subords[$submission->user_id],
                'entries' => $entries
            );
            $entriesResult->free_result();
        }

    Basically I'm getting a list of users that are subordinates of a given supervisor_id (every user entry has a supervisor_id field), then grabbing entries belonging to any of those users. I can't help but think there is a more elegant way of doing this, like:

        SELECT FROM tablename WHERE user->supervisor_id=2222

    Is there something like this with PHP/MySQL? Should probably learn relational databases properly sometime. :(
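    A hedged single-query sketch of the "more elegant way" asked for above, joining through the supervisor relation (it assumes the users/submissions/submittedentries columns shown in the code; adjust names as needed):

        SELECT u.id AS user_id, s.*, e.*
        FROM users u
        JOIN submissions s           ON s.user_id   = u.id
        LEFT JOIN submittedentries e ON e.timestamp = s.timestamp
        WHERE u.supervisor_id = 2222
           OR u.id = 2222;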


  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty bad when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete.

    I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this is throwing out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: Here are the table definitions for the example I used in my question:

        create table Order (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
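    For reference, a hedged sketch in plain SQL of the manual join being described, over the table definitions above (Order is bracketed because ORDER is a reserved word):

        SELECT c.Id AS CustomerId, o.Id AS OrderId, o.ProductName
        FROM Customer c
        JOIN [Order] o ON o.Id = c.OrderId;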


  • Why does SQLite take such a long time to fetch the data?

    - by Derk
    I have two possible queries, both giving the result set I want. Query one takes about 30ms, but 150ms to fetch the data from the database:

        SELECT id
        FROM featurevalues as featval3
        WHERE featval3.feature IN (?,?,?,?)
        AND EXISTS (
            SELECT 1
            FROM product_to_value, product_to_value as prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
              AND product_to_value.value = featurevalues.id
              AND features.id = ?
              AND featurevalues.id IN (?,?)
              AND product_to_value.product = prod2.product
              AND prod2.value = featval3.id
        )

    Query two takes about 3ms -- this is the one I therefore prefer -- but also takes 170ms to fetch the data:

        SELECT (
            SELECT prod2.value
            FROM product_to_value, product_to_value as prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
              AND product_to_value.value = featurevalues.id
              AND features.id = ?
              AND featurevalues.id IN (?,?)
              AND product_to_value.product = prod2.product
              AND prod2.value = featval3.id
        ) as id
        FROM featurevalues as featval3
        WHERE featval3.feature IN (?,?,?,?)

    The 170ms seems to be related to the number of rows from table featval3. After an index is used on featval3.feature IN (?,?,?,?), 151 items "remain" in featval3. Is there something obvious I am missing regarding the slow fetching? As far as I know everything is properly indexed. I am confused because the second query only takes a blazing 3ms to run.
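    A hedged debugging step for cases like this: SQLite can show which indexes each variant actually uses. Prefix either full query with EXPLAIN QUERY PLAN, e.g. for query one:

        EXPLAIN QUERY PLAN
        SELECT id
        FROM featurevalues AS featval3
        WHERE featval3.feature IN (?,?,?,?)
        AND EXISTS (
            SELECT 1
            FROM product_to_value, product_to_value AS prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
              AND product_to_value.value = featurevalues.id
              AND features.id = ?
              AND featurevalues.id IN (?,?)
              AND product_to_value.product = prod2.product
              AND prod2.value = featval3.id
        );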


  • How can I generate a FindBugs report that shows me the bugs removed between two revisions in the bug database?

    - by David Deschenes
    I am attempting to execute a combination of the FindBugs commands filterBugs and convertXmlToText, against a bug database that I created, to generate a report that shows me all of the bugs removed between two revisions of the system that I am working on. Unfortunately, the resulting report does not show any bug details. It appears that convertXmlToText throws away all bugs that are dead (aka inactive)... the exact set of bugs that I'd like to see. Below is what I see when I pass the results of the filterBugs command to the mineBugHistory command:

        build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./mineBugHistory
        seq  version  time           classes  NCSS   added  newCode  fixed  removed  retained  dead  active
        0    r39764   1271169398000  438      74069  0      64       0      0        0         0     64
        1    r39921   1271186932000  441      74333  0      0        22     0        42        0     42
        2    r40149   1271185876000  449      74636  0      0        3      0        39        22    39
        3    r40344   1271180332000  452      74789  0      0        7      0        32        25    32
        4    r40558   1271179612000  452      74806  0      0        1      0        31        32    31
        5    r40793   1271178818000  464      75610  0      0        20     0        11        33    11
        6    r41016   1271176154000  467      75712  0      0        4      0        7         53    7
        7    r41303   1271175616000  481      76931  0      0        7      0        0         57    0
        8    r41558   1271175026000  486      77793  0      0        0      0        0         64    0

    What I'd like to see in the HTML report is the list of the 64 bugs that are shown as active in version r39764 (sequence # 0). Below is the command line that I am using to generate the HTML report:

        build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./convertXmlToText -html:fancy-hist.xsl > ../../../mmfg/bugDB-removed.html


  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All,

    I have a question regarding implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. At the moment let's assume the Subject, Verb, Object approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs:    contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs:  ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store (and later retrieve/present) my criteria for smart searches/mailboxes for later use. As an example, using SVO I could easily store then reconstruct a query for "date between two dates" -- simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand -- not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism.

    Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently instead of making it an ALL OR NOTHING type smart search (i.e. instead of MATCH ALL or MATCH ANY, I need to simply allow them to select -- I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
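    One possible storage schema for the per-entry AND/OR idea above, sketched in generic SQL (all table and column names are illustrative, not from the question):

        CREATE TABLE smart_search (
            id    INTEGER PRIMARY KEY,
            name  TEXT NOT NULL
        );

        CREATE TABLE smart_search_criterion (
            id           INTEGER PRIMARY KEY,
            search_id    INTEGER NOT NULL REFERENCES smart_search(id),
            subject      TEXT NOT NULL,              -- e.g. 'date_received'
            verb         TEXT NOT NULL,              -- e.g. 'greater_than'
            object       TEXT,                       -- comparison value, stored as text
            conjunction  TEXT NOT NULL DEFAULT 'AND' -- how this row combines with the previous one
        );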


  • RhinoMocks: how to test if method was called when using PartialMock

    - by epitka
    I have a class that is something like this:

        public class MyClass
        {
            public virtual string MethodA(Command cmd) { /* some code here */ }

            public void MethodB(SomeType obj)
            {
                // do some work
                MethodA(command);
            }
        }

    I mocked MyClass as a PartialMock (mocks.PartialMock<MyClass>) and I set up an expectation for MethodA:

        var cmd = new Command();
        // set the cmd to expected state
        Expect.Call(MyClass.MethodA(cmd)).Repeat.Once();

    The problem is that MethodB calls the actual implementation of MethodA instead of the mocked one. I must be doing something wrong (not very experienced with RhinoMocks). How do I force it to mock MethodA? Here is the actual code:

        var cmd = new SetBaseProductCoInsuranceCommand();
        cmd.BaseProduct = planBaseProduct;
        var insuredType = mocks.DynamicMock<InsuredType>();
        Expect.Call(insuredType.Code).Return(InsuredTypeCode.AllInsureds);
        cmd.Values.Add(new SetBaseProductCoInsuranceCommand.CoInsuranceValues()
        {
            CoInsurancePercent = 0,
            InsuredType = insuredType,
            PolicySupplierType = ppProvider
        });
        Expect.Call(() => service.SetCoInsurancePercentages(cmd)).Repeat.Once();
        mocks.ReplayAll();

        //act
        service.DefaultCoInsurancesFor(planBaseProduct);

        //assert
        service.AssertWasCalled(x => x.SetCoInsurancePercentages(cmd), x => x.Repeat.Once());


  • What is the best way to handle connections to MySQL from C#?

    - by srk
    I am working on a C# application which connects to a MySQL server. There are about 20 functions which connect to the database. This application will be deployed on over 200 machines. I am using the code below, which is identical for all the functions, to connect to my database. The problem is, I can see some connections were not closed and are still alive when the application is deployed on those machines.

    Connection string:

        <add key="Con_Admin" value="server=test-dbserver; database=test_admindb; uid=admin; password=1Password; Use Procedure Bodies=false;" />

    Declaration of the connection, globally in the application [Global.cs]:

        public static MySqlConnection myConn_Instructor = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);

    Function to query the database:

        public static DataSet CheckLogin_Instructor(string UserName, string Password)
        {
            DataSet dsValue = new DataSet();
            //MySqlConnection myConn = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);
            try
            {
                string Query = "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password`," +
                               " FROM accounts " +
                               " WHERE accounts.str_nric = '" + UserName + "' AND accounts.str_password = '" + Password + "\'";
                MySqlCommand cmd = new MySqlCommand(Query, Global.myConn_Instructor);
                MySqlDataAdapter da = new MySqlDataAdapter();
                if (Global.myConn_Instructor.State == ConnectionState.Closed)
                {
                    Global.myConn_Instructor.Open();
                }
                cmd.ExecuteScalar();
                da.SelectCommand = cmd;
                da.Fill(dsValue);
                Global.myConn_Instructor.Close();
            }
            catch (Exception ex)
            {
                Global.myConn_Instructor.Close();
                ExceptionHandler.writeToLogFile(System.Environment.NewLine + "Target : " + ex.TargetSite.ToString() +
                                                System.Environment.NewLine + "Message : " + ex.Message.ToString() +
                                                System.Environment.NewLine + "Stack : " + ex.StackTrace.ToString());
            }
            return dsValue;
        }


  • urllib2.Request() with data returns empty url

    - by Mr. Polywhirl
    My main concern is the function getUrlAndHtml(). If I manually build and append the query to the end of the URI, I can get response.geturl(), but if I pass a dictionary as the request data, the URL does not come back. Is there any way to guarantee the redirected URL? In my example below, if thisWorks = True I get back a URL, but the returned URL is the request URL as opposed to a redirect link. On a side note, the encoding for .E2.80.93 does not translate to '–' for some reason?

        #!/usr/bin/python
        import pprint
        import urllib
        import urllib2
        from bs4 import BeautifulSoup
        from sys import argv

        URL = 'http://en.wikipedia.org/w/index.php?'

        def yesOrNo(boolVal):
            return 'yes' if boolVal else 'no'

        def getTitleFromRaw(page):
            return page.strip().replace(' ', '_')

        def getUrlAndHtml(title, printable=False):
            thisWorks = False
            if thisWorks:
                query = 'title={:s}&printable={:s}'.format(title, yesOrNo(printable))
                opener = urllib2.build_opener()
                opener.addheaders = [('User-agent', 'Mozilla/5.0')]
                response = opener.open(URL + query)
            else:
                params = {'title':title, 'printable':yesOrNo(printable)}
                data = urllib.urlencode(params)
                headers = {'User-agent':'Mozilla/5.0'}
                request = urllib2.Request(URL, data, headers)
                response = urllib2.urlopen(request)
            return response.geturl(), response.read()

        def getSoup(html, name=None, attrs=None):
            soup = BeautifulSoup(html)
            if name is None:
                return None
            return soup.find(name, attrs)

        def setTitle(soup, newTitle):
            title = soup.find('div', {'id':'toctitle'})
            h2 = title.find('h2')
            h2.contents[0].replaceWith('{:s} for {:s}'.format(h2.getText(), newTitle))

        def updateLinks(soup, url):
            fragment = '#'
            for a in soup.findAll('a', href=True):
                a['href'] = a['href'].replace(fragment, url + fragment)

        def writeToFile(soup, filename='out.html', indentLevel=2):
            with open(filename, 'wt') as out:
                pp = pprint.PrettyPrinter(indent=indentLevel, stream=out)
                pp.pprint(soup)
            print('Wrote {:s} successfully.'.format(filename))

        if __name__ == '__main__':
            def exitPgrm():
                print('usage: {:s} "<PAGE>" <FILE>'.format(argv[0]))
                exit(0)

            if len(argv) == 2:
                help = argv[1]
                if help == '-h' or help == '--help':
                    exitPgrm()

            if False:'''
            if not len(argv) == 3:
                exitPgrm()
            '''

            page = 'Led Zeppelin'   # argv[1]
            filename = 'test.html'  # argv[2]

            title = getTitleFromRaw(page)
            url, html = getUrlAndHtml(title)
            soup = getSoup(html, 'div', {'id':'toc'})
            setTitle(soup, page)
            updateLinks(soup, url)
            writeToFile(soup, filename)


  • Is it possible to get the parsed text of a SqlCommand with SqlParameters?

    - by Burg
    What I am trying to do is create some arbitrary sql command with parameters, set the values and types of the parameters, and then return the parsed sql command - with parameters included. I will not be directly running this command against a sql database, so no connection should be necessary. So if I ran the example program below, I would hope to see the following text (or something similar):

        WITH SomeTable (SomeColumn) AS
        (
            SELECT N':)'
            UNION ALL
            SELECT N'>:o'
            UNION ALL
            SELECT N'^_^'
        )
        SELECT SomeColumn FROM SomeTable

    And the sample program is:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        namespace DryEraseConsole
        {
            class Program
            {
                static void Main(string[] args)
                {
                    const string COMMAND_TEXT = @"
        WITH SomeTable (SomeColumn) AS
        (
            SELECT N':)'
            UNION ALL
            SELECT N'>:o'
            UNION ALL
            SELECT @Value
        )
        SELECT SomeColumn FROM SomeTable
        ";
                    SqlCommand cmd = new SqlCommand(COMMAND_TEXT);
                    cmd.CommandText = COMMAND_TEXT;
                    cmd.Parameters.Add(new SqlParameter
                    {
                        ParameterName = "@Value",
                        Size = 128,
                        SqlDbType = SqlDbType.NVarChar,
                        Value = "^_^"
                    });
                    Console.WriteLine(cmd.CommandText);
                    Console.ReadKey();
                }
            }
        }

    Is this something that is achievable using the .NET standard libraries? Initial searching says no, but I hope I'm wrong.


  • php form submit and the resend information screen

    - by Para
    Hello, I want to ask a best-practice question. Suppose I have a form in PHP with 3 fields, say name, email and comment. I submit the form via POST. In PHP I try to insert the data into the database. Suppose the insertion fails. I should now show the user an error and display the form filled in with the data he previously entered so he can correct his error. Showing the form in its initial state won't do. So I display the form and the 3 fields are now filled in from PHP with echo or such. Now if I click refresh I get a message saying "Are you sure you want to resend information?".

    OK. Suppose after I insert the data I don't carry on but instead redirect to the same page, with the necessary parameters in the query string. This makes the message go away, but I have to carry 3 parameters in the query string. So my question is: what is the better way to do this? I want to not carry around lots of parameters in the query string but also not get that error. How can this be done? Should I use cookies to store the form information?


  • Why am I returning empty records when querying in mysql with php?

    - by Brian Bolton
    I created the following script to query a table and return the first 30 results. The query returns 30 results, but they do not have any text or information. Why would this be? The table stores Vietnamese characters. The database is MySQL 4. Here's the page: http://saomaidanang.com/recentposts.php Here's the code:

        <?php
        header( 'Content-Type: text/html; charset=utf-8' );

        //CONNECTION INFO
        $dbms = 'mysql';
        $dbhost = 'xxxxx';
        $dbname = 'xxxxxxx';
        $dbuser = 'xxxxxxx';
        $dbpasswd = 'xxxxxxxxxxxx';
        $conn = mysql_connect($dbhost, $dbuser, $dbpasswd) or die('Error connecting to mysql');
        mysql_select_db($dbname, $conn);

        //QUERY
        $result = mysql_query("SET NAMES utf8");
        $cmd = 'SELECT * FROM `phpbb_posts_text` ORDER BY `phpbb_posts_text`.`post_subject` DESC LIMIT 0, 30 ';
        $result = mysql_query($cmd);
        ?>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html dir="ltr">
        <head>
        <title>recent posts</title>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
        <p>
        <?php
        //DISPLAY
        while ($myrow = mysql_fetch_row($result)) {
            echo 'post subject:';
            echo(utf8_encode($myrow['post_subject']));
            echo 'post text:';
            echo(utf8_encode($myrow['post_text']));
        }
        ?>
        </p>
        </body>
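    A hedged first debugging step, in plain MySQL, to confirm the rows really contain text and to check the column's declared character set before suspecting the PHP layer (table and column names from the question):

        SHOW CREATE TABLE phpbb_posts_text;

        SELECT post_subject, LENGTH(post_subject), CHAR_LENGTH(post_subject)
        FROM phpbb_posts_text
        LIMIT 5;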


  • Delivering activity feed items in a moderately scalable way

    - by sotangochips
    The application I'm working on has an activity feed where each user can see their friends' activity (much like Facebook). I'm looking for a moderately scalable way to show a given user's activity stream on the fly. I say 'moderately' because I'm looking to do this with just a database (PostgreSQL) and maybe memcached. For instance, I want this solution to scale to 200k users, each with 100 friends.

    Currently, there is a master activity table that stores the rendered html for the given activity (Jim added a friend, George installed an application, etc.). This master activity table keeps the source user, the html, and a timestamp. Then, there's a separate ('join') table that simply keeps a pointer to the person who should see this activity in their friend feed, and a pointer to the object in the main activity table. So, if I have 100 friends, and I do 3 activities, then the join table will grow to 300 items. Clearly this table will grow very quickly. It has the nice property, though, that fetching activity to show to a user takes a single (relatively) inexpensive query.

    The other option is to just keep the main activity table and query it by saying something like:

        select * from activity where source_user in (1, 2, 44, 2423, ... my friend list)

    This has the disadvantage that you're querying for users who may never be active, and as your friend list grows, this query can get slower and slower. I see the pros and the cons of both sides, but I'm wondering if some SO folks might help me weigh the options and suggest one way or the other. I'm also open to other solutions, though I'd like to keep it simple and not install something like CouchDB, etc. Many thanks!
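    A hedged sketch of keeping the second option cheap in PostgreSQL: a composite index so the friend-list query reads each friend's newest rows directly (the friendships table and the created_at column are illustrative, not from the question):

        CREATE INDEX activity_source_time_idx
            ON activity (source_user, created_at DESC);

        SELECT a.*
        FROM activity a
        JOIN friendships f ON f.friend_id = a.source_user
        WHERE f.user_id = 42
        ORDER BY a.created_at DESC
        LIMIT 50;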


  • Boost::Asio - Remove the "null"-character at the end of TCP packets

    - by shump
    I'm trying to make a simple MSN client, mostly for fun but also for educational purposes. I started with some TCP packet sending and receiving using Boost Asio, as I want cross-platform support. I have managed to send a "VER" command and receive its response. However, after I send the following "CVR" command, Asio throws an "End of file" error. After some further research I found, by packet sniffing, that my TCP packets to the messenger server got an extra "null" character (ASCII code: 00) at the end of the message. This means that my VER command gets an extra character at the end, which I don't think the messenger server likes; it therefore shuts down the connection when I try to read the CVR response.

    This is how my packet looks when sniffing it (its payload):

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0a 0a 00
        (Char:) VER 1 MSNP15 CVR 0...

    and this is how Adium (chat client for OS X)'s packet looks:

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0d 0a
        (Char:) VER 1 MSNP15 CVR 0..

    So my question is if there is any way to remove the null character at the end of each packet, or if I've misunderstood something and used Asio in a wrong way. My write function (slightly edited) looks like this:

        int sendVERMessage()
        {
            boost::system::error_code ignored_error;
            char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";
            boost::asio::write(socket, boost::asio::buffer(sendBuf),
                               boost::asio::transfer_all(), ignored_error);
            if (ignored_error)
            {
                cout << "Failed to send to host!" << endl;
                return 1;
            }
            cout << "VER message sent!" << endl;
            return 0;
        }

    And here's the main documentation on the MSN protocol I'm using. Hope I've been clear enough.


  • Creating reusable chunks of Linq to SQL

    - by tia
    Hi, I am trying to break up horrible LINQ to SQL queries to make them a bit more readable. Say I want to return all orders for products which in the previous year had more than 100 orders. So, the original query is:

        from o in _context.Orders
        where (from o1 in _context.Orders
               where o1.Year == o.Year - 1 && o1.Product == o.Product
               select o1).Count() > 100
        select o;

    Messy. What I'd like to be able to do is to break it down, e.g.:

        private IEnumerable<Order> LastSeasonOrders(Order order)
        {
            return (from o in _context.Orders
                    where o.Year == order.Year - 1 && o.Product == order.Product
                    select o);
        }

    which then lets me change the original query to:

        from o in _context.Orders
        where LastSeasonOrders(o).Count() > 100
        select o;

    This doesn't seem to work, as I get "method Blah has no supported translation to SQL" when the query is run. Any quick tips on the correct way to achieve this?
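    For reference, a hedged rendering of the intended result set in plain SQL (table and column names taken from the LINQ query above):

        SELECT o.*
        FROM Orders o
        WHERE (SELECT COUNT(*)
               FROM Orders o1
               WHERE o1.Year = o.Year - 1
                 AND o1.Product = o.Product) > 100;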


  • How to use IO.popen to write-to and read-from a child process?

    - by mackenir
    I am running net share from a Ruby script to delete a Windows network share. If files on the share are in use, net share will ask the user if they want to go ahead with the deletion, so my script needs to inspect the output from the command and write out Y if it detects that net share is asking it for input. In order to be able to write out to the process I open it with access flags "r+". When attempting to write to the process with IO#puts, I get an error:

        Errno::EPIPE: Broken pipe

    What am I doing wrong here? (The error occurs on the line net_share.puts "Y".) (The question text written out by net share is not followed by a newline, so I am using IO#readpartial to read in the output.)

        def delete_network_share share_path
          command = "net share #{share_path} /DELETE"
          net_share = IO.popen(command, "r+")
          text = ""
          while true
            begin
              line = net_share.readpartial 1000 # read all the input that's available
            rescue EOFError
              break
            end
            text += line
            if text.scan("Do you want to continue this operation? (Y/N)").size > 0
              net_share.puts "Y"
              net_share.flush # probably not needed?
            end
          end
          net_share.close
        end


  • INSERT INTO table doesn't work???

    - by Joann
    I found a tutorial from Nettuts+ and it has source code in it, so I tried implementing it in my site. It is working now. However, it doesn't have a registration system, so I am making one. The thing is, as I expected, my code is not working... It doesn't seem to know how to INSERT into the database. Here's the function that inserts data into the db:

        function register_User($un, $email, $pwd) {
            $query = "INSERT INTO users( username, password, email )
                      VALUES(:uname, :pwd, :email) LIMIT 1";
            if($stmt = $this->conn->prepare($query)) {
                $stmt->bind_param(':uname', $un);
                $stmt->bind_param(':pwd', $pwd);
                $stmt->bind_param(':email', $email);
                $stmt->execute();
                if($stmt->fetch()) {
                    $stmt->close();
                    return true;
                }
                else
                    return "The username or email you entered is already in use...";
            }
        }

    I have debugged the connection to the database from within the class; it says it's connected. I tried using this method instead:

        function register($un, $email, $pwd) {
            $registerquery = $this->conn->query(
                "INSERT INTO users(uername, password, email)
                 VALUES('".$un."', '".$pwd."', '".$email."')");
            if($registerquery) {
                echo "<h4>Success</h4>";
            } else {
                echo "<h4>Error</h4>";
            }
        }

    And it echoes "Error"... Can you please help me pinpoint the error in this??? :(
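    Two hedged observations on the statements above, with a corrected sketch: MySQL's INSERT ... VALUES accepts no LIMIT clause, and :name placeholders are PDO syntax, while bind_param belongs to mysqli, which uses positional ? markers (note also the uername misspelling in the second query):

        -- corrected statement, with mysqli-style positional placeholders
        INSERT INTO users (username, password, email)
        VALUES (?, ?, ?);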


  • Select statement with multiple 'where' fields using same value without duplicating text

    - by kdbdallas
    I will start by saying that I don't think what I want can be done, but that said, I am hoping I am wrong and someone knows more than me. So here is your chance... Prove you are smarter than me :)

    I want to do a search against a SQLite table looking for any records that "are similar", without having to write out the query in longhand. To clarify, this is how I know I can write the query:

        select * from Articles where title like '%Bla%' or category like '%Bla%' or post like '%Bla%'

    This works and is not a huge deal if you are only checking against a couple of columns, but if you need to check against a bunch then your query can get really long and nasty looking really fast, not to mention the chance for typos (i.e. 'Bla%' instead of '%Bla%').

    What I am wondering is if there is a shorthand way to do this. This next code does not work the way I want, but just shows kind of what I am looking for:

        select * from Articles where title or category or post like '%Bla%'

    Does anyone know if there is a way to specify that multiple 'where' columns should use the same search value without listing that same search value for every column? Thanks in advance!
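    A hedged shorthand that works in SQLite: concatenate the columns once and apply a single LIKE (coalesce guards against a NULL column swallowing the whole string; column names from the question):

        select * from Articles
        where (coalesce(title, '') || ' ' || coalesce(category, '') || ' ' || coalesce(post, ''))
              like '%Bla%';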


  • shell script filter du and find by a string inside a file in a subfolder

    - by Jason
    I have the following command that I run on Cygwin:

        find /cygdrive/d/tmp/* -maxdepth 0 -mtime -150 -type d | xargs du --max-depth=0 > foldersizesreport.csv

    I intended it to do the following: for each folder under /d/tmp/ that was modified in the last 150 days, check its total size including the files within it, and report it to the file foldersizesreport.csv. However, that is now not good enough for me, as it turns out each subfolder contains a properties file:

        /d/tmp/subfolder1/somefile.properties
        /d/tmp/subfolder2/somefile.properties
        /d/tmp/subfolder3/somefile.properties
        /d/tmp/subfolder4/somefile.properties

    Inside each somefile.properties there is a property SOMEPROPKEY=3808612800100 (among other properties); this is a time in milliseconds. I need to change the command so that, instead of -mtime -150, the whole calculation includes only those subfolderX whose somefile.properties holds a SOMEPROPKEY value that is a time in milliseconds in the future; if the value (e.g. SOMEPROPKEY=23948948) is in the past, then the folder should not be included in foldersizesreport.csv at all, because it is not relevant to me. So the resulting report should look like:

        /d/tmp/,subfolder1,<its size in KB>
        /d/tmp/,subfolder2,<its size in KB>

    and if subfolder3 had a SOMEPROPKEY=34243234 (time in ms in the past) then it would not be in that csv file. So basically I'm looking for:

        find /cygdrive/d/tmp/* -maxdepth 0 -mtime -150 -type d | <only subfolders whose somefile.properties has a SOMEPROPKEY time in ms in the future, not in the past> | xargs du --max-depth=0 > foldersizesreport.csv


  • MS Access Bulk Insert exits app

    - by Brij
    In a web service, I have to bulk insert data into an MS Access database. First there was a single complex INSERT-SELECT query; there was no error, but the app exited after inserting some records. I have divided the query to make it simpler, and I am using LINQ to remove complexity from the query. Now I am inserting records using a for loop. There are approx 10000 records. Again, same problem: I put a breakpoint after the loop, but it is never hit. I have used try/catch, but no error is found.

        For Each item In lstSelDel
            Try
                qry = String.Format("insert into Table(Id,link,file1,file2,pID) values ({0},""{1}"",""{2}"",""{3}"",{4})", item.WebInfoID, item.Links, item.Name, item.pName, pDateID)
                DAL.ExecN(qry)
            Catch ex As Exception
                Throw ex
            End Try
        Next

        Public Shared Function ExecN(ByVal SQL As String) As Integer
            Dim ret As Integer = -1
            Dim nowConString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + System.AppDomain.CurrentDomain.BaseDirectory + "DataBase\\mydatabase.mdb;"
            Dim nowCon As System.Data.OleDb.OleDbConnection
            nowCon = New System.Data.OleDb.OleDbConnection(nowConString)
            Dim cmd As New OleDb.OleDbCommand(SQL, nowCon)
            nowCon.Open()
            ret = cmd.ExecuteNonQuery()
            nowCon.Close()
            nowCon.Dispose()
            cmd.Dispose()
            Return ret
        End Function

    After the app exits, I see w3wp.exe using more than 50% of memory. Any idea what is going wrong? Is there any limitation of MS Access?

