Search Results

Search found 15456 results on 619 pages for 'global temporary tables'.


  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building an ajax tic-tac-toe game in PHP/MySQL. The premise of the game is that you can share a URL like mygame.com/123 with your friends and play multiple simultaneous games.

    The way I have it set up is that a file (reload.php) is called every 3 seconds while the user is viewing their game board. This reload.php builds their game boards, and its HTML output replaces the current game board (thus showing games in which it is their turn).

    Initially I built it entirely with PHP/MySQL and had zero caching. A friend suggested handling all of the temporary/quick-read information through memcache (storing moves and ID matchups) and then building the game boards from that information. My issue is that both solutions hit a wall at roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes (dedicated CPU: 1.2GHz, RAM: 752MB). Each call to reload.php performs 3 select and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit.

    Now that the backstory is done, my question is: would there be a bottleneck in that each user is polling the same file every 3 seconds to rebuild their game boards, while all users are sitting on index.php, from which the AJAX calls are made? If so, is it possible to spread the users' calls out over a set of files designated to building the game boards (e.g. reload1.php, reload2.php, etc.) and direct users to the appropriate file? Would this relieve the pressure?

    A long-winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have a 170MB file (roughly 180 million bytes). What I need to do is create a table that lists:

      - all 4096-byte combinations found in the file [column 'bytes'], and
      - the number of times each combination appeared [column 'occurrences']

    Assume two things: I can save data very fast, but I can update my saved data very slowly. How should I sample the file and save the needed information? Here are some suggestions that are (extremely) slow:

      - Go through each 4096-byte combination in the file and save each one, but search the table first for an existing combination and update its value. This is unbelievably slow.
      - Go through each 4096-byte combination in the file and save rows into a temporary table until it holds 1 million of them. Go through that table and fix the entries (combine repeating combinations), then copy them to the big table. Repeat with the next 1 million rows of data. This is faster by a bit, but still unbelievably slow.

    This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22GB from experience), and I know that any solution posted would take a bit of time to finish. I need the most efficient saving process.
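
    A common shape for this kind of job is to count in memory first and write the table once at the end, so the slow update path is never touched. Below is a minimal Python sketch of that idea; the overlapping windows, the whole-file read, and the digest-as-key trick are all assumptions on my part, not details from the question:

        import hashlib
        from collections import Counter

        WINDOW = 4096  # size of each byte combination

        def count_windows(path):
            """Count occurrences of every overlapping 4096-byte window."""
            counts = Counter()
            with open(path, "rb") as f:
                data = f.read()  # ~170 MB, assumed to fit in memory
            for i in range(len(data) - WINDOW + 1):
                # Key on a fixed-size digest instead of the raw 4 KB slice
                # to keep the counter's memory use bounded.
                counts[hashlib.sha1(data[i:i + WINDOW]).digest()] += 1
            # Bulk-insert this dict in one pass afterwards, exercising
            # only the fast "save" path, never the slow "update" path.
            return counts

    A rolling hash (e.g. Rabin-Karp) would avoid rehashing 4 KB per position, but the overall shape stays the same: count first, save once.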

    Read the article

  • Using AJAX in Rails: How do I change a button as soon as it's clicked?

    - by sdc
    Hey! I'm teaching myself Ruby, and have been stuck on this for a couple of days. I'm currently using MooTools-1.3-compat and Rails 3.

    I'd like to replace one button (called "Follow") with another (called "Unfollow") as soon as someone clicks on it. I'm using :remote => true and have a file ending in .js.erb that's being called... I just need help figuring out what goes in this .js file.

    The "Follow" button is in a div with id="follow_form", but there are many buttons on the page, and they all have id="follow_form" - i.e. $("follow_form").set(...) replaces the first element, and that's not correct. I need help replacing the button that made the call.

    I looked at this tutorial, but the line below doesn't work for me. Could it be because I'm using MooTools instead of Prototype?

        $("follow_form").update("<%= escape_javascript(render('users/unfollow')) %>")

    P.S. This is what I have so far, and this works:

    In app/views/shared:

        <%= form_for current_user.subscriptions.build(:event => @event), :remote => true do |f| %>
          <div><%= f.hidden_field :event %></div>
          <div class="actions"><%= f.submit "Follow" %></div>
        <% end %>

    In app/views/events/create.js.erb:

        alert("follow!"); //Temporary... this is what I'm trying to replace

    In app/controllers/subscriptions_controller.rb:

        def create
          @subscription = current_user.subscriptions.build(params[:subscription])
          @subscription.save
          respond_to do |format|
            format.html { redirect_to(..) }
            format.js { render :layout }
          end
        end

    Any help would be greatly, greatly appreciated!

    Read the article

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result-set:

        mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
            -> JOIN slip s ON s.cust_id = c.cust_id
            -> JOIN line l ON l.slip_id = s.slip_id
            -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
            -> GROUP BY c.cust_name
            -> HAVING SUM(l.line_subtotal) > 49999
            -> ORDER BY c.cust_name;

        id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra
        ---+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+---------------------------------------------
        1  | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                | 1    | Using where; Using temporary; Using filesort
        1  | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id | 446  |
        1  | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id | 1    |
        1  | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id | 1    |

        4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement still takes about a minute to execute. Isn't it true that this query only has to search through 449 rows? Does anyone have any idea what could be slowing it down so much?

    Read the article

  • Cannot implicitly convert type ...

    - by Newbie
    I have the following function:

        public Dictionary<DateTime, object> GetAttributeList(
            EnumFactorType attributeType,
            Thomson.Financial.Vestek.Util.DateRange dateRange)
        {
            DateTime startDate = dateRange.StartDate;
            DateTime endDate = dateRange.EndDate;

            return ((
                // Step 1: Iterate over the attribute list and filter the records by
                // the supplied attribute type
                from assetAttribute in AttributeCollection
                where assetAttribute.AttributeType.Equals(attributeType)
                // Step 2: Assign the TimeSeriesData collection to a temporary variable
                let timeSeriesList = assetAttribute.TimeSeriesData
                // Step 3: Iterate over the TimeSeriesData list and filter the records by
                // the supplied date
                from timeSeries in timeSeriesList.ToList()
                where timeSeries.Key >= startDate && timeSeries.Key <= endDate
                // Finally build the needed collection
                select new AssetAttribute()
                {
                    TimeSeriesData = PopulateTimeSeriesData(timeSeries.Key, timeSeries.Value)
                }).ToList<AssetAttribute>().Select(i => i.TimeSeriesData));
        }

        private Dictionary<DateTime, object> PopulateTimeSeriesData(DateTime dateTime, object value)
        {
            Dictionary<DateTime, object> timeSeriesData = new Dictionary<DateTime, object>();
            timeSeriesData.Add(dateTime, value);
            return timeSeriesData;
        }

    Error: Cannot implicitly convert type 'System.Collections.Generic.IEnumerable<System.Collections.Generic.Dictionary<System.DateTime, object>>' to 'System.Collections.Generic.Dictionary<System.DateTime, object>'. An explicit conversion exists (are you missing a cast?)

    Using C# 3.0. Please help.

    Read the article

  • is it possible to make one click fire two or more events in javascript?

    - by NewInAlbert
    I am currently making a temporary download page for website visitors. The page includes a form; after the visitor fills the form out, the site takes them to the PDF download page. On the download page there are some PDF download links (I am just using <a> tags). However, I want to attach an onclick event to those links so that once they have been clicked, the page refreshes automatically or redirects to another page.

        <a href="/file.pdf" onClick="window.location.reload()">The File</a>

    I have tried the jQuery way as well:

        <a href="/file.pdf" id="FileDownload">The File</a>
        <script>
          $("#FileDownload").click(function(){
            location.reload();
          });
        </script>

    But none of them are working. Do you have any good ideas about this? Many thanks.

    P.S. What if I wanted to add a countdown once the file download starts, and then reload the page when the countdown finishes? Looks like I have asked several questions... Thanks a ton in advance.

    Read the article

  • Configuration Error: finding assembly after I swapped referenced dll out. Visual Studio 2003

    - by TampaRich
    Here is the situation: I had a clean build of my ASP.NET web application working. I then went into the bin folder under the web app and replaced two referenced DLLs with two older versions of the same DLLs (same name, etc.). After testing I swapped back to the new DLLs, and now my application keeps throwing this configuration error:

        === Pre-bind state information ===
        LOG: DisplayName = xxxxx.xxxx.Personalization (Partial)
        LOG: Appbase = file:///c:/inetpub/wwwroot/appname
        LOG: Initial PrivatePath = bin
        Calling assembly : (Unknown).
        LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).

    I found this issue on the web and tried all the suggested solutions, but nothing worked. I then went into all the projects referenced by the solution, cleared out the bin/debug folder in each, cleared out the obj folder under each, and deleted the temporary files associated with the application. I rebuilt, and it still will not work due to this error. I'm not sure what is causing this or how to fix it. I have tried restarting IIS and stopping Indexing Services, which was said to be a known issue. This is a .NET Framework 1.1 app and Visual Studio 2003. Any suggestions would be great. Thanks.

    Read the article

  • Strange compilation error on passing an argument by reference to a function

    - by Grewdrewgoo Goobergabbsoen
    Here's the code:

        #include <iostream>
        using namespace std;
        void mysize(int &size, int size2);
        int main()
        {
            int *p;
            int val;
            p = &val;
            cout << p;
            mysize(&val, 20); // Error is pointed here!
        }
        void mysize(int &size, int size2)
        {
            cout << sizeof(size);
            size2 = size2 + 6000;
            cout << size2;
        }

    Here's the error output from GCC:

        In function 'int main()':
        Line 10: error: invalid initialization of non-const reference of type 'int&' from a temporary of type 'int*'
        compilation terminated due to -Wfatal-errors.

    What does that imply? I do not understand the error message... invalid initialization of a non-constant? I declared the prototype function above with two parameters: one a reference to an integer and one just an integer value itself. I passed the reference of the int (see line 10), yet this error keeps being thrown at me. What is the issue?

    Read the article

  • Copy/publish images linked from the html files to another server and update the HTML files referencing them

    - by Phil
    I am publishing content from a Drupal CMS to static HTML pages on another domain, hosted on a second server. Building the HTML files was simple (using PHP/MySQL to write the files). I have a list of images referenced in my HTML, all of which exist below the /userfiles/ directory:

        cat *.html | grep -oE [^\'\"]+userfiles[\/.*]*/[^\'\"] | sort | uniq

    which produces a list of files:

        http://my.server.com/userfiles/Another%20User1.jpg
        http://my.server.com/userfiles/image/image%201.jpg
        ...

    My next step is to copy these images across to the second server and translate the tags in the HTML files. I understand that sed is probably the tool I need. E.g.:

        sed 's/[^"]\+userfiles[\/image]\?\/\([^"]\+\)/\/images\/\1/g'

    should change http://my.server.com/userfiles/Another%20User1.jpg to /images/Another%20User1.jpg, but I cannot work out exactly how I would use the script. I.e. can I use it to update the files in place, or do I need to juggle temporary files, etc.? And then how can I ensure that the files are moved to the correct location on the second server?
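
    For what it's worth, the rewrite step does not have to be sed: a read-modify-write in a scripting language sidesteps the in-place question entirely. A hedged Python sketch follows (the regex is a loose port of the question's pattern, not a tested equivalent):

        import re
        from pathlib import Path

        # Loosely mirrors the question's sed pattern: match userfiles URLs
        # and keep only the trailing file name.
        PATTERN = re.compile(r'[^"\']+/userfiles/(?:image/)?([^"\']+)')

        def rewrite_html_in_place(html_dir):
            """Rewrite userfiles URLs to /images/<name> in every .html file."""
            for page in Path(html_dir).glob("*.html"):
                text = page.read_text()
                page.write_text(PATTERN.sub(r"/images/\1", text))

        # Copying the images themselves to the second server's /images/
        # directory (scp/rsync) would remain a separate step.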

    Read the article

  • Problem with configure script

    - by cube
    I am running into a problem with the ./configure script for ffmpeg. My Linux environment uses BusyBox, which only allows a limited set of Linux commands. One command used in the ffmpeg ./configure script is mktemp -u; the problem is that BusyBox does not recognize the -u switch as valid, so it complains about it and breaks the configure process.

    This is the relevant code in ./configure that uses the mktemp -u command:

        if ! check_cmd type mktemp; then
            # simple replacement for missing mktemp
            # NOT SAFE FOR GENERAL USE
            mktemp(){
                echo "${2%XXX*}.${HOSTNAME}.${UID}.$$"
            }
        fi

        tmpfile(){
            tmp=$(mktemp -u "${TMPDIR}/ffconf.XXXXXXXX")$2 &&
                (set -C; exec > $tmp) 2>/dev/null ||
                die "Unable to create temporary file in $TMPDIR."
            append TMPFILES $tmp
            eval $1=$tmp
        }

    I am not good with bash scripting at all, so I was wondering if anyone had an idea of how I can force this configure script not to use mktemp -u and instead use the 'replacement' alternative that is available in the snippet above. Thanks.

    By the way... simply removing the -u switch does not work, nor does replacing it with -t or -p. I believe mktemp has to be bypassed completely.

    Read the article

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure:

        $ node --eval 'require("http");'
        undefined:1
        ^
        ReferenceError: require is not defined
            at eval at <anonymous> (node.js:762:36)
            at eval (native)
            at node.js:762:36
        $

    Here's a working example of using require() typed into the REPL:

        $ node
        > require("http");
        { STATUS_CODES:
           { '100': 'Continue', '101': 'Switching Protocols', '102': 'Processing'
           , '200': 'OK', '201': 'Created', '202': 'Accepted'
           , '203': 'Non-Authoritative Information', '204': 'No Content'
           , '205': 'Reset Content', '206': 'Partial Content', '207': 'Multi-Status'
           , '300': 'Multiple Choices', '301': 'Moved Permanently'
           , '302': 'Moved Temporarily', '303': 'See Other', '304': 'Not Modified'
           , '305': 'Use Proxy', '307': 'Temporary Redirect', '400': 'Bad Request'
           , '401': 'Unauthorized', '402': 'Payment Required', '403': 'Forbidden'
           , '404': 'Not Found', '405': 'Method Not Allowed', '406': 'Not Acceptable'
           , '407': 'Proxy Authentication Required', '408': 'Request Time-out'
           , '409': 'Conflict', '410': 'Gone', '411': 'Length Required'
           , '412': 'Precondition Failed', '413': 'Request Entity Too Large'
           , '414': 'Request-URI Too Large', '415': 'Unsupported Media Type'
           , '416': 'Requested Range Not Satisfiable', '417': 'Expectation Failed'
           , '418': 'I\'m a teapot', '422': 'Unprocessable Entity', '423': 'Locked'
           , '424': 'Failed Dependency', '425': 'Unordered Collection'
           , '426': 'Upgrade Required', '500': 'Internal Server Error'
           , '501': 'Not Implemented', '502': 'Bad Gateway', '503': 'Service Unavailable'
           , '504': 'Gateway Time-out', '505': 'HTTP Version not supported'
           , '506': 'Variant Also Negotiates', '507': 'Insufficient Storage'
           , '509': 'Bandwidth Limit Exceeded', '510': 'Not Extended' }
        , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] }
        , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] }
        , ServerResponse: { [Function: ServerResponse] super_: [Circular] }
        , ClientRequest: { [Function: ClientRequest] super_: [Circular] }
        , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } }
        , createServer: [Function]
        , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } }
        , createClient: [Function]
        , cat: [Function] }
        >

    Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.

    Read the article

  • MySql product/tag query optimisation - please help!

    - by Nige
    Hi there,

    I have an SQL query I am struggling to optimise. It is basically used to pull back products for a shopping cart. The products each have tags attached via a many-to-many product_tag table, and I also pull back a store name from a separate store table. I'm using GROUP_CONCAT to get a list of tags for display (this is why I have the strange GROUP BY / ORDER BY clauses at the bottom), and I need to order by dateadded, showing the latest scheduled product first. Here is the query:

        SELECT products.*, stores.name,
               GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR " ") taglist
        FROM (products)
        JOIN product_tag ON products.id = product_tag.productid
        JOIN tags ON tags.id = product_tag.tagid
        JOIN stores ON products.cid = stores.siteid
        WHERE dateadded < '2010-05-28 07:55:41'
        GROUP BY products.id ASC
        ORDER BY products.dateadded DESC
        LIMIT 2

    Unfortunately, even with a small set of data (3 tags and about 12 products) the query is taking 0.0034 seconds to run. Eventually I want to have about 2000 products and 50 tags in this system (I'm guessing this will be very slooooow). Here is the EXPLAIN output:

        id | select_type | table       | type   | possible_keys   | key     | key_len | ref                            | rows | Extra
        ---+-------------+-------------+--------+-----------------+---------+---------+--------------------------------+------+---------------------------------
        1  | SIMPLE      | tags        | ALL    | PRIMARY         | NULL    | NULL    | NULL                           | 4    | Using temporary; Using filesort
        1  | SIMPLE      | product_tag | ref    | tagid,productid | tagid   | 4       | cs_final.tags.id               | 2    |
        1  | SIMPLE      | products    | eq_ref | PRIMARY,cid     | PRIMARY | 4       | cs_final.product_tag.productid | 1    | Using where
        1  | SIMPLE      | stores      | ALL    | siteid          | NULL    | NULL    | NULL                           | 7    | Using where; Using join buffer

    Can anyone help?

    Read the article

  • Sheet and thread memory problem

    - by Xident
    Hi guys,

    I recently started a project which can export some precalculated graphics/audio to files for post-processing. All I did was put a new window (with a progress indicator and an Abort button) in my main xib and open it using the following code:

        [NSApp beginSheet:REC_Sheet modalForWindow:MOTHER_WINDOW modalDelegate:self didEndSelector:nil contextInfo:nil];
        NSModalSession session = [NSApp beginModalSessionForWindow:REC_Sheet];
        RECISNOTDONE = YES;
        while (RECISNOTDONE) {
            if ([NSApp runModalSession:session] != NSRunContinuesResponse)
                break;
            usleep(100);
        }
        [NSApp endModalSession:session];

    A background thread (pthread) was started earlier to actually perform the work and save all the Targas/wave files. This worked great, but after a while it turned out that the main thread was not responding anymore and my memory footprint grew unstoppably. I tried to debug it with Instruments and saw a lot of CFHash etc. stuff growing towards infinity. By accident I clicked below the sheet, and temporarily it helped: the main thread (AppKit?) released its stuff, but only for a little while.

    I can't explain it. At first I thought it was my thread's access to the progress bar (updating the progress at 0.5 s intervals), so I cut that out. But even if I don't update anything and do nothing with the progress bar, my application eats up all the memory, because the main thread does not release its "main event" (or whatever) stuff.

    Is there any possibility to drain this main-thread memory (a runloop / NSApp call?), and why on earth does the main thread not respond anymore after this simple task? I don't have a clue anymore, please help!

    Thanks in advance!

    P.S. How do you guys implement "threaded long task" stuff and update your GUI?

    Read the article

  • Find all A^x in a given range

    - by Austin Henley
    I need to find all monomials of the form A^X that, when evaluated, fall within a range from m to n. It is safe to say that the base A is greater than 1, the power X is greater than 2, and only integers need to be used. For example, in the range 50 to 100, the solutions would be:

        2^6
        3^4
        4^3

    My first attempt was to brute force all combinations of A and X that make "sense". However, this becomes too slow when used with very large numbers over a big range, since these solutions are used as part of much more intensive processing. Here is the code:

        def monoSearch(min, max):
            base = 2
            power = 3
            while 1:
                while base**power < max:
                    if base**power > min:
                        print "Found " + repr(base) + "^" + repr(power) + " = " + repr(base**power)
                    power = power + 1
                base = base + 1
                power = 3
                if base**power > max:
                    break

    I could remove one base**power by saving the value in a temporary variable, but I don't think that would have a drastic effect. I also wondered whether using logarithms would be better, or whether there is a closed-form expression for this. I am open to any optimizations or alternatives for finding the solutions.
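
    Since the question raises logarithms: bounding the exponent range with log makes the scan roughly proportional to the number of answers rather than to the size of the range. A Python 3 sketch of that idea (the names and the float-rounding re-check are mine, not from the question):

        import math

        def mono_search(m, n):
            """Return (base, power, value) triples with m <= base**power <= n."""
            results = []
            base = 2
            while base ** 3 <= n:  # the smallest allowed power is 3
                # Smallest and largest exponents that can land inside [m, n].
                lo = max(3, math.ceil(math.log(m, base)))
                hi = math.floor(math.log(n, base))
                for power in range(lo, hi + 1):
                    value = base ** power
                    if m <= value <= n:  # guard against float log rounding
                        results.append((base, power, value))
                base += 1
            return results

        print(mono_search(50, 100))  # [(2, 6, 64), (3, 4, 81), (4, 3, 64)]

    Each base is only visited while base**3 <= n, i.e. about n**(1/3) candidates, and each visit does constant log work plus one loop over actual hits.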

    Read the article

  • In Rails, how to respect :scope when using validates_uniqueness_of in an embedded object form?

    - by mkirk
    I have a Book model, which has_many chapters (which belong_to a Book). I want to ensure uniqueness of chapter titles, but only within the scope of a single book. The catch is that the form for creating chapters is embedded in the Book model's form (the Book model accepts_nested_attributes_for :chapters).

    Within the Chapter model:

        validates_uniqueness_of(
          :chapter_title,
          :scope => :book_id,
          :case_sensitive => false,
          :message => "No book can have multiple chapters with the same title.")

    However, when I submit the Book creation form (which also includes multiple embedded Chapter forms), validation fails if a chapter title already exists in a chapter of a different book:

        Book.create(:chapters => [
          Chapter.new(:title => "Introduction"),
          Chapter.new(:title => "How to build things")])
        => Book 1 successfully created

        Book.create(:chapters => [
          Chapter.new(:title => "Introduction"),
          Chapter.new(:title => "Destroy things")])
        => Book 2 fails to validate

        second_book = Book.create(:chapters => [
          Chapter.new(:title => "A temporary Introduction title"),
          Chapter.new(:title => "Destroy things")])
        => Book 2 successfully created

        second_book.chapters[0].title = "Introduction"
        => success
        second_book.chapters.save
        => success
        second_book.save
        => success

    Can anyone shed some light on how to do this? Or why it's happening?

    Read the article

  • First Time Working With Others?

    - by cam
    I've been at my very first programming job for about 8 months now, and I've learned incredible amounts so far. Unfortunately, I'm the sole developer of internal applications for a small startup company. For the first time ever, I'll be handing off some of my projects to someone else when I leave this job.

    I've documented all my projects thoroughly (at least I think so), but I still feel nervous about someone else reading my code. For example, I've always done this sort of thing:

        for (int i = 0; i < blah.length; i++)
        {
            // Do stuff
        }

    Should I name 'i' something descriptive? It's only a temporary variable that exists solely within that loop, and it seems pretty obvious what the loop does with 'i'. This is just one example. Another is that I name variables inconsistently... I don't really conform to any naming standard beyond starting all private members with an underscore.

    Are there any resources that could show me how to make things easier for the next developer? Are there standards for this sort of thing?

    Read the article

  • SHGetFolderPath

    - by user530589
    This code works on Windows 7 but doesn't work on Windows XP (it outputs only part of the startup folder path):

        #include <iostream>
        #include <shlobj.h>
        using namespace std;

        int main()
        {
            wchar_t startupFolder[1024];
            HRESULT hr = SHGetFolderPath(0, CSIDL_STARTUP, 0, 0, startupFolder);
            if (SUCCEEDED(hr))
                wcout << L"Startup folder = " << startupFolder << endl;
            else
                cout << "Error when getting startup folder\n";
            getchar();
            return 0;
        }

    The output is:

        Startup folder = C:\Documents and Settings\Admin\ <- cursor is here. Newline is not printed.

    I also have a Russian Windows XP, so I think this is a Unicode issue. When I use wprintf I get:

        C:\Documents and Settings\Admin\???????? .....

    Thanks.

    As a temporary solution: after SHGetFolderPath I call GetShortPathName, and then I get the path in MS-DOS style:

        C:\DOCUME~1\Admin\5D29~1\4A66~1\60C2~1

    Not really a beautiful solution, but at least it is a valid path.

    Read the article

  • MYSQL variables - SET @var

    - by Lizard
    I am attempting to create a MySQL snippet that will analyse a table and remove duplicate entries (duplicates are based on two fields, not the entire record). I have the following code, which works when I hard-code the variables into the queries, but when I take them out and use variables instead, I get MySQL errors. Below is the script:

        SET @tblname = 'mytable';
        SET @fieldname = 'myfield';
        SET @concat1 = 'checkfield1';
        SET @concat2 = 'checkfield2';

        ALTER TABLE @tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL;

        UPDATE @tblname SET `tmpcheck` = CONCAT(@concat1,'-',@concat2);

        CREATE TEMPORARY TABLE `tmp_table` (
          `tmpfield` VARCHAR( 100 ) NOT NULL
        ) ENGINE = MYISAM;

        INSERT INTO `tmp_table` (`tmpfield`)
        SELECT @fieldname FROM @tblname
        GROUP BY `tmpcheck` HAVING ( COUNT(`tmpcheck`) > 1 );

        DELETE FROM @tblname WHERE @fieldname IN (SELECT `tmpfield` FROM `tmp_table`);

        ALTER TABLE @tblname DROP `tmpcheck`;

    I am getting the following error:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near
        '@tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL' at line 1

    Is this because I can't use a variable for a table name? What else could be wrong, or how would I get around this issue? Thanks in advance.
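
    MySQL user variables can hold values but not identifiers (table or column names), which is what the #1064 error is pointing at. The usual workaround is to build the statement text before executing it, either server-side with PREPARE/EXECUTE or in the calling program. A hedged Python/DB-API sketch of the client-side version (the whitelist and all names are illustrative assumptions):

        # Identifiers can't be bound as query parameters, so splice them into
        # the SQL text only after validating against a known-good whitelist.
        ALLOWED_TABLES = {"mytable"}
        ALLOWED_FIELDS = {"myfield", "checkfield1", "checkfield2"}

        def dedupe(conn, tblname, fieldname, concat1, concat2):
            if tblname not in ALLOWED_TABLES:
                raise ValueError("unexpected table name")
            if not {fieldname, concat1, concat2} <= ALLOWED_FIELDS:
                raise ValueError("unexpected field name")
            cur = conn.cursor()
            cur.execute(f"ALTER TABLE `{tblname}` ADD `tmpcheck` VARCHAR(255) NOT NULL")
            # Note: referencing the two columns here, which appears to be the
            # intent (the original @concat1/@concat2 would concatenate the
            # variable values themselves, not the column values).
            cur.execute(
                f"UPDATE `{tblname}` SET `tmpcheck` = CONCAT(`{concat1}`, '-', `{concat2}`)"
            )
            # ...build and execute the remaining statements the same way...
            conn.commit()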

    Read the article

  • Select Query Joined on Two Fields?

    - by btollett
    I've got a few tables in an Access database:

        ID | LocationName
        ---+-------------
        1  | Location1
        2  | Location2

        ID | LocationID | Date  | NumProductsDelivered
        ---+------------+-------+---------------------
        1  | 1          | 12/10 | 3
        2  | 1          | 01/11 | 2
        3  | 1          | 02/11 | 2
        4  | 2          | 11/10 | 1
        5  | 2          | 12/10 | 1

        ID | LocationID | Date  | NumEmployees | EmployeeType
        ---+------------+-------+--------------+---------------
        1  | 1          | 12/10 | 10           | 1 (=Permanent)
        2  | 1          | 12/10 | 3            | 2 (=Temporary)
        3  | 1          | 12/10 | 1            | 3 (=Support)
        4  | 2          | 10/10 | 1            | 1
        5  | 2          | 11/10 | 2            | 1
        6  | 2          | 11/10 | 1            | 2
        7  | 2          | 11/10 | 1            | 3
        8  | 2          | 12/10 | 2            | 1
        9  | 2          | 12/10 | 1            | 3

    What I want to do is pass in the LocationID as a parameter and get back something like the following table. So, if I pass in 2 as my LocationID, I should get:

        Date  | NumProductsDelivered | NumPermanentEmployees | NumSupportEmployees
        ------+----------------------+-----------------------+--------------------
        10/10 |                      | 1                     |
        11/10 | 1                    | 2                     | 1
        12/10 | 1                    | 2                     | 1

    It seems like this should be a pretty simple query. I really don't even need the first table, except as a way to fill the combo box on the form from which the user chooses a location. Unfortunately, everything I've tried has returned a lot more data than it should. My confusion is in how to set up the join (presumably that's what I'm looking for here), given that I want both the Date and the LocationID to be the same for each row in the result set. Any help would be much appreciated. Thanks.
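
    The crux is joining the two detail tables on both LocationID and Date at once, not on a single key. As a neutral illustration of the two-key join idea (sketched in pandas, with the rows taken from the question's own tables), something like:

        import pandas as pd

        deliveries = pd.DataFrame({
            "LocationID": [1, 1, 1, 2, 2],
            "Date": ["12/10", "01/11", "02/11", "11/10", "12/10"],
            "NumProductsDelivered": [3, 2, 2, 1, 1],
        })
        employees = pd.DataFrame({
            "LocationID": [1, 1, 1, 2, 2, 2, 2, 2, 2],
            "Date": ["12/10", "12/10", "12/10", "10/10", "11/10",
                     "11/10", "11/10", "12/10", "12/10"],
            "NumEmployees": [10, 3, 1, 1, 2, 1, 1, 2, 1],
            "EmployeeType": [1, 2, 3, 1, 1, 2, 3, 1, 3],
        })

        loc = 2
        # One column per employee type for the chosen location...
        emp = employees[employees.LocationID == loc].pivot_table(
            index="Date", columns="EmployeeType", values="NumEmployees")
        # ...then an outer join on the shared Date key; the SQL equivalent
        # joins ON d.LocationID = e.LocationID AND d.Date = e.Date.
        out = emp.merge(
            deliveries[deliveries.LocationID == loc].set_index("Date"),
            left_index=True, right_index=True, how="outer")
        print(out)

    In Access itself the equivalent would be a query whose join condition spans both fields; the point of the sketch is only that both keys must appear in the join.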

    Read the article

  • LinQ optimization

    - by Budda
    Here is a piece of code:

        void MyFunc(List<MyObj> objects)
        {
            MyFunc1(objects);
            foreach (MyObj obj in objects.Where(obj1 => obj1.Good))
            {
                // Do Action With Good Object
            }
        }

        void MyFunc1(List<MyObj> objects)
        {
            int iGoodCount = objects.Where(obj1 => obj1.Good).Count();
            BeHappy(iGoodCount);
            // do other stuff with 'objects' collection
        }

    Here we see that the collection is analyzed twice, and each time the value of the 'Good' property is checked for each member: the first time when calculating the count of good objects, the second when iterating through all good objects.

    It is desirable to have this optimized, and here is a straightforward solution: before the call to MyFunc1, create an additional temporary collection of good objects only (goodObjects; it can be an IEnumerable); get the count of these objects and pass it as an additional parameter to MyFunc1; in MyFunc iterate not over 'objects.Where(...)' but over the 'goodObjects' collection.

    Not too bad an approach (as far as I can see), but an additional parameter has to be passed. Question: is there any LINQ out-of-the-box functionality that allows caching during the first Where().Count(), remembering the processed collection and reusing it in the next iteration?

    Any thoughts are welcome. Thanks.
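
    For comparison, the "straightforward solution" described above looks like this when sketched in Python (illustration only; LINQ specifics aside, the point is that the predicate runs once and its result feeds both the count and the loop):

        def be_happy(count):  # stand-in for the question's BeHappy
            print("good objects:", count)

        def my_func(objects):
            # Materialize the filter once; reuse it for counting and iterating.
            good_objects = [o for o in objects if o.good]
            my_func1(good_objects, len(good_objects))
            for obj in good_objects:
                ...  # do action with good object

        def my_func1(good_objects, good_count):
            be_happy(good_count)
            # other stuff that previously re-scanned the full collection

    In LINQ terms this corresponds to materializing the Where() result into a concrete collection before handing it around, rather than re-running the deferred query.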

    Read the article

  • Is this an error in "More Effective C++" in Item28?

    - by particle128
    I encountered a question when I was reading the item28 in More Effective C++ .In this item, the author shows to us that we can use member template in SmartPtr such that the SmartPtr<Cassette> can be converted to SmartPtr<MusicProduct>. The following code is not the same as in the book,but has the same effect. #include <iostream> class Base{}; class Derived:public Base{}; template<typename T> class smart{ public: smart(T* ptr):ptr(ptr){} template<typename U> operator smart<U>() { return smart<U>(ptr); } ~smart(){delete ptr;} private: T* ptr; }; void test(const smart<Base>& ) {} int main() { smart<Derived> sd(new Derived); test(sd); return 0; } It indeed can be compiled without compilation error. But when I ran the executable file, I got a core dump. I think that's because the member function of the conversion operator makes a temporary smart, which has a pointer to the same ptr in sd (its type is smart<Derived>). So the delete directive operates twice. What's more, after calling test, we can never use sd any more, since ptr in sd has already been delete. Now my questions are : Is my thought right? Or my code is not the same as the original code in the book? If my thought is right, is there any method to do this? Thanks very much for your help.

    Read the article

  • Passing huge amounts of data as a hexadecimal (0x123AB...) parameter of a CLR stored procedure in SQL Server

    - by user193655
    I post this question as a follow-up of this question, since the thread is not receiving more answers.

    I'm trying to understand whether it is possible to pass a large amount of data as "0x5352532F..." to a parameter of a CLR stored procedure. The idea is to send the data directly to the CLR stored procedure, instead of writing it to a temporary DB field and from there passing it as a varbinary(max) parameter.

    I have a triple question:

      1) Is it possible? If yes, how? Let's say I want to pass a PDF file to the CLR stored procedure (not the path, the full bits that make up the file). Something like:

          exec MyCLRStoredProcs.dbo.insertfile
            @file_remote_path = 'c:\temp\test_file.txt',
            @file_contents = 0x4D5A90000300000004000.... --(this long list is the file content)

      where insertfile is a stored proc that writes the binary data I pass as @file_contents to the server path given by @file_remote_path.

      2) Is there a corruption risk with this approach (or is it the same approach SQL Server uses behind the scenes)?

      3) How do I convert the content of a file into the "0x23423..." hexadecimal representation?
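
    On question 3, producing the "0x..." literal is just a byte-to-hex dump of the file's contents. A small Python sketch (the path and proc names are copied from the question's example; whether such a statement is practical at this size is exactly what questions 1 and 2 ask):

        def to_sql_hex_literal(path):
            """Return the file's bytes as a 0x... hexadecimal literal."""
            with open(path, "rb") as f:
                return "0x" + f.read().hex().upper()

        # Building the EXEC statement text around it:
        literal = to_sql_hex_literal(r"c:\temp\test_file.txt")
        stmt = ("exec MyCLRStoredProcs.dbo.insertfile "
                r"@file_remote_path = 'c:\temp\test_file.txt', "
                f"@file_contents = {literal}")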

    Read the article

  • Storing data in XML or MongoDB

    - by user766473
    Here is my use case:

      1. I have some data which I am storing now in XML files. The data I am storing is not persistent, i.e. I delete the user data once the user logs out.
      2. My server communicates with the client using XML requests and responses. So initially we decided that, since we are sending XML as the response anyway, let's store the data in XML so that the conversion time from database to XML format is saved.
      3. The client will request XML based on some filter conditions, so we will have to use XQuery.
      4. A maximum of 100 entries will be in an XML file, at least as of now.

    Now I would like to hear some advice on whether I should use XML or MongoDB. My concerns:

      1. How good is it to store temporary data in MongoDB and delete it / take a backup once done with the session?
      2. Conversion from MongoDB's JSON format to XML.
      3. Handling changes in the schema design.

    We can't use any DB other than MongoDB, as some persistent operations are still done on MongoDB. Thanks in advance.

    Read the article

  • Why could "insert (...) values (...)" not insert a new row?

    - by nang
    Hi,

    I have a simple SQL insert statement of the form:

        insert into MyTable (...) values (...)

    It is used repeatedly to insert rows and usually works as expected: it inserts exactly 1 row into MyTable, which is also the value returned by the Delphi statement AffectedRows := myInsertADOQuery.ExecSQL.

    After some time there was a temporary network connectivity problem. As a result, other threads of the same application received EOleExceptions (Connection failure, -2147467259 = unspecified error). Later, the network connection was re-established, and these threads reconnected and were fine.

    The thread responsible for executing the insert statement described above, however, did not notice the connectivity problems (no exceptions); probably it simply wasn't executed while the network was down. But after the connectivity problems, myInsertADOQuery.ExecSQL always returned 0 and no rows were inserted into MyTable anymore. After a restart of the application the insert statement worked again as expected.

    For SQL Server, is there any defined case where an insert statement like the one above would not insert a row and would return 0 as the number of affected rows? The primary key is an autogenerated GUID. There are no unique or check constraints (which would result in an exception anyway, rather than silently not inserting a row). Are there any known ADO bugs (Provider=SQLOLEDB.1)? Any other explanations for this behaviour?

    Thanks, Nang.

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
    Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, including some fantastic enhancements to ASP.NET and the Entity Framework. You can download and start using them now. Below are details on a few of the great ASP.NET, web development, and Entity Framework improvements you can take advantage of with this release. Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials.

    One ASP.NET

    With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc.), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File->New Project with VS 2013 you'll now see a single ASP.NET project option. Selecting this project brings up an additional dialog that allows you to start with a base project template and then optionally add/remove the technologies you want to use in it. For example, you could start with a Web Forms template and add Web API support to it, or create an MVC project and also enable Web Forms pages within it. This makes it easy to use any ASP.NET technology you want within your apps and take advantage of any feature across the entire ASP.NET technology span.

    Richer Authentication Support

    The new "One ASP.NET" project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your application - making it much easier to build secure applications that enable SSO from a variety of identity providers. For example, when you start with the Web Forms or MVC templates you can easily add any of the following authentication options to the application:

      - No Authentication
      - Individual User Accounts (single sign-on support with Facebook, Twitter, Google, and Microsoft ID - or forms auth with ASP.NET Membership)
      - Organizational Accounts (single sign-on support with Windows Azure Active Directory)
      - Windows Authentication (Active Directory in an intranet application)

    The Windows Azure Active Directory support is particularly cool. Last month we updated Windows Azure Active Directory so that developers can now easily create any number of directories using it (for free and deployed within seconds). It now takes only a few moments to enable single sign-on support within your ASP.NET applications against these Windows Azure Active Directories: simply choose the "Organizational Accounts" radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory. This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it. Now when you run the app, your users can easily and securely sign in using their Active Directory credentials - regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013.

    Responsive Project Templates with Bootstrap

    The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap, an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tablets and desktops. When you resize the browser to a narrow window to see how the template's home page would look on a phone, the contents gracefully wrap around and the horizontal top menu turns into an icon; clicking the menu icon expands it into a vertical menu, which enables a good navigation experience on devices with little screen real estate. We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices - and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there. You can learn more about Bootstrap here.

    Visual Studio Web Tooling Improvements

    Visual Studio 2013 includes a new, much richer HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense grouping, and other great improvements. For example, typing "ng-" on an HTML element will show the IntelliSense for AngularJS. This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications. The HTML editor can also now inspect your page at design time to determine all of the CSS classes that are available - for example, auto-completing class names from Bootstrap's CSS file, so there is no more guessing at which Bootstrap element names you need to use. Visual Studio 2013 also comes with built-in editing support for both CoffeeScript and LESS. The LESS editor comes with all the cool features from the CSS editor and has specific IntelliSense for variables and mixins across all the LESS documents in the @import chain.

    Browser Link - SignalR channel between browser and Visual Studio

    The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, Firefox, Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers at the same time. This makes it much easier to develop/test against multiple browsers in parallel. Browser Link also exposes an API that enables developers to write Browser Link extensions. By taking advantage of the Browser Link API, it becomes possible to create very advanced scenarios that cross boundaries between Visual Studio and any browser that's connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser's developer tools, remote-control mobile emulators, and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward.

    ASP.NET Scaffolding

    ASP.NET Scaffolding is a new code generation framework for ASP.NET web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms. When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API controller, the required NuGet packages and references to enable Web API are added to your project automatically. To do this, just choose the Add->New Scaffold Item context menu. Support for scaffolding async controllers uses the new async features from Entity Framework 6.

    ASP.NET Identity

    ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. It also allows you to choose the persistence model for user profiles in your application: you can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports claims-based authentication, where the user's identity is represented as a set of claims from a trusted issuer. Users can log in by creating an account on the website using a username and password, or they can log in using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.

    ASP.NET Web API 2

    ASP.NET Web API 2 has a bunch of great improvements, including:

      - Attribute routing: ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers.
      - OAuth 2.0 support: the Web API and Single Page Application project templates now support authorization using OAuth 2.0, a framework for authorizing client access to protected resources. It works for a variety of clients including browsers and mobile devices.
      - OData improvements: ASP.NET Web API now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Specific enhancements include support for $select, $expand, $batch, and $value; improved extensibility; type-less support; and the ability to reuse an existing model.
      - OWIN integration: ASP.NET Web API now fully supports OWIN and can be run on any OWIN-capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API.
      - More improvements: CORS support, authentication filters, filter overrides, improved unit testability, and a portable ASP.NET Web API client.

    To learn more go to http://www.asp.net/web-api/.

    ASP.NET SignalR 2

    ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. We've added support for Cross-Origin Resource Sharing (CORS), and iOS and Android support using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We've also added support for the portable .NET client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr.

    ASP.NET MVC 5

    The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features:

      - Authentication filters: these filters allow you to specify authentication logic per-action, per-controller, or globally for all controllers.
      - Attribute routing: allows you to define your routes on actions or controllers.

    To learn more go to http://www.asp.net/mvc.

    Entity Framework 6 Improvements

    Visual Studio 2013 ships with Entity Framework 6, which brings a lot of great new features to the data access space:

      - Async and Task<T> support: EF6's new async query and save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios. This allows you to free up threads that might otherwise be blocked on data access requests and use them to process other requests while you wait for the database engine to respond. When the database server responds, the work is re-queued within your ASP.NET application and execution continues, enabling you to easily write significantly more scalable server code.
      - Interception and logging: interception and SQL logging allow you to view - or even change - every command that is sent to the database by Entity Framework. This includes a simple, human-readable log, which is great for debugging, as well as lower-level building blocks that give you access to the commands and their results.
      - Custom Code First conventions: the new custom Code First conventions enable bulk configuration of a Code First model, reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don't match the Code First defaults. For example, a convention could configure all properties called 'Key' to be the primary key of the entity they belong to, instead of the default Id or <type name>Id.
      - Connection resiliency: the new connection resiliency feature in EF6 enables you to register an execution strategy to handle - and potentially retry - failed database operations. This is especially useful when deploying to cloud environments, where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible - but overridable - defaults for the number of retries and time between retries when errors occur. Registering it is simple using the new code-based configuration support.

    (The code samples accompanying these features appeared as screenshots in the original post.) These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features.

    Microsoft OWIN Components

    The Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET "Katana" project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana.

    Summary

    Today's Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These features span from server framework to data access to tooling to client-side HTML development. They also integrate some great open-source technology and contributions from our developer community. Download and start using them today!

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
