Search Results

Search found 17501 results on 701 pages for 'stored functions'.

Page 571/701

  • Generic property - specify the type at runtime

    - by Lirik
    I was reading a question on making a generic property, but I'm a little confused by the last example from the first answer (I've included the relevant code below): You have to know the type at compile time. If you don't know the type at compile time then you must be storing it in an object, in which case you can add the following property to the Foo class: public object ConvertedValue { get { return Convert.ChangeType(Value, Type); } } That seems strange: it's converting the value to the specified type, but it's returning it as an object when the value was stored as an object. Doesn't the returned object still require unboxing? If it does, then why bother with the conversion of the type? I'm also trying to make a generic property whose type will be determined at run time: public class Foo { object Value {get;set;} Type ValType {get;set;} Foo(object value, Type type) { Value = value; ValType = type; } // I need a property that is actually // returned as the specified value type... public object ConvertedValue { get { return Convert.ChangeType(Value, ValType); } } } Is it possible to make a generic property? Does the returned value still require unboxing after it's accessed?
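
    To make the distinction the question circles around concrete, here is a small sketch (not from the original thread; the type names are made up): a property can only be statically typed if the containing class is itself generic, while Convert.ChangeType always hands back an object, so the caller still has to cast or unbox.

        using System;

        // Foo<T> is a hypothetical generic counterpart to the asker's Foo: the type is
        // fixed at compile time, so no cast is needed. FooRuntime mirrors the object-based
        // version: ChangeType converts the value, but the static type is still object.
        public class Foo<T>
        {
            public T Value { get; set; }
            public Foo(T value) { Value = value; }
        }

        public class FooRuntime
        {
            public object Value { get; set; }
            public Type ValType { get; set; }
            public FooRuntime(object value, Type type) { Value = value; ValType = type; }

            public object ConvertedValue
            {
                get { return Convert.ChangeType(Value, ValType); }
            }
        }

        class Demo
        {
            static void Main()
            {
                var typed = new Foo<int>(42);
                int a = typed.Value;                         // no cast, no unboxing

                var runtime = new FooRuntime("42", typeof(int));
                int b = (int)runtime.ConvertedValue;         // cast/unbox still required
                Console.WriteLine(a + b);
            }
        }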

    Read the article

  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now: Using memcache on all major queries and leaving it alone to do what it does best. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes. What approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on memcache alone and upgrade the memory as the load increases with the number of users? Thanks a lot!

    Read the article

  • xml to hashmap - php

    - by csU
    Hi, given an XML structure like this <gesmes:Envelope> <gesmes:subject>Reference rates</gesmes:subject> <gesmes:Sender> <gesmes:name>European Central Bank</gesmes:name> </gesmes:Sender> <Cube> <Cube time="2010-03-26"> <Cube currency="USD" rate="1.3353"/> <Cube currency="JPY" rate="124.00"/> <Cube currency="BGN" rate="1.9558"/> <Cube currency="CZK" rate="25.418"/> ... ... </Cube> </Cube> </gesmes:Envelope> how can I go about getting the values stored into a hashmap or similar structure in PHP? I have been trying to do this for the last few hours now but can't manage it :D It is homework so I guess no full solutions please (though the actual assignment is to use the web services, I am just stuck with parsing it :D). Maybe someone could show me a brief example for a made-up XML file that I could apply to mine? Thanks

    Read the article

  • Mmap and structure

    - by blid..pl
    I'm working on some code involving communication between processes, using semaphores. I made a structure like this: typedef struct container { sem_t resource, mutex; int counter; } container; and use it this way (in the main app and the same in subordinate processes): container *memory; shm_unlink("MYSHM"); //just in case fd = shm_open("MYSHM", O_RDWR|O_CREAT|O_EXCL, 0); if(fd == -1) { printf("Error"); exit(EXIT_FAILURE); } memory = mmap(NULL, sizeof(container), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0); ftruncate(fd, sizeof(container)); Everything is fine when I use one of the sem_ functions, but when I try to do something like memory->counter = 5; it doesn't work. Probably I got something wrong with pointers, but I tried almost everything and nothing seems to work. Maybe there's a better way to share variables, structures etc. between processes? Unfortunately I'm not allowed to use Boost or something similar; the code is for educational purposes and I intend to keep it as simple as possible.

    Read the article

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat file parser that can perform different data scrubbing operations per call, return the emitted results from each scrubbing operation to the caller and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have written this code already in a very unDRY fashion. Which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops, with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating. lines_from_file.each do |line| line.match(/some regex/).each do |sub_str| # Process substring into useful format # EG1: Simple gsub() call # EG2: Custom function call to do complex scrubbing # and matching, emitting results to array # EG3: Loop to match opening/closing/nested brackets # or other delimiters and emit results to array end # Add processed lines to a buffer as SQL insert statement @buffer << PREPARED INSERT STATEMENT # Flush buffer when "buffer size limit reached" or "end of file" if sql_buffer_full || last_line_reached @dbc.insert(SQL INSERTS FROM BUFFER) @buffer = nil end end I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what the real Rubyists suggest. Over to you. Thanks in advance :D

    Read the article

  • mod_rewrite if file exists

    - by Mathieu Parent
    Hi everyone, I already have two rewrite rules that work correctly for now, but some more code has to be added for them to work perfectly. I have a website hosted at mydomain.com and all subdom.mydomain.com requests are rewritten to mydomain.com/subs/subdom. My CMS has to handle the request if the file being requested does not exist; the rewrite is done like so: RewriteCond $1 !^subs/ RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$ RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ subs/%1/index.php?page=$1 [L] My CMS handles the next part of the parsing as usual. The problem is that if a file really exists, I need to link to it without passing through my CMS. I managed to do it like this: RewriteCond $1 !^subs/ RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$ RewriteCond %{REQUEST_FILENAME} -f [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^(.*)$ subs/%1/$1 [L] So far it seems to work like a charm. Now I am being picky and I need to have default files that are stored in subs/default/. If the file exists in the subdomain folder, we should grab that one, but if not, we need to get the file from the default subdomain. And if the file does not exist anywhere, we should be using the 404 page from the current subdomain unless there is none. I hope this describes it well enough. Thank you for your time!

    Read the article

  • Running a MySQL query using Node blocks the whole process and then times out

    - by lobengula3rd
    I have a Node.js script that uses the mysql npm module (Felix's). I have a procedure stored in my DB which I call when the user selects an option to kind of create its own instance of the program. The user chooses for how long he wants that data to be initialized for him. This is supposed to be between 1 and 2 years. So if he chooses 1 year, this query will insert around 20,000 rows into 1 table. If I run this query on a local DB it takes around 30 seconds (I suppose that is reasonable because it's a big query which should be run only once every 1 or 2 years, so it's OK). For some reason my node script freezes, as if it can't handle any more calls from other users. The even worse problem is that after like 2 minutes my client UI gets an error from the server. At this point not all the data that was supposed to enter the DB is entered. After waiting like another minute all the data finally gets to the DB and only then does it accept new requests. This is my connection: this.connection = mysql.createConnection({ host : '********rds.amazonaws.com', user : 'admin', password : '******', database : '*****' }); and this is my query function: this.createCourts = function (req, res, next){ connection.query('CALL filldates("' + req.body['startDate'] + '","' + req.body['endDate'] + '","' + req.body['numOfCourts'] + '","' + req.body['duration'] + '","' + req.body['sundayOpen'] + '","' + req.body['mondayOpen'] + '","' + req.body['tuesdayOpen'] + '","' + req.body['wednesdayOpen'] + '","' + req.body['thursdayOpen'] + '","' + req.body['fridayOpen'] + '","' + req.body['saturdayOpen'] + '","' + req.body['sundayClose'] + '","' + req.body['mondayClose'] + '","' + req.body['tuesdayClose'] + '","' + req.body['wednesdayClose'] + '","' + req.body['thursdayClose'] + '","' + req.body['fridayClose'] + '","' + req.body['saturdayClose'] + '");', function(err){ if (err){ console.log(err); } else return res.send(200); }); }; What am I missing here? As I understand it, connection.query should be async, so why is it actually blocking my node script? Thanks.

    Read the article

  • Creating a [materialised] view from generic data in Oracle/MySQL

    - by Andrew White
    I have a generic datamodel with 3 tables CREATE TABLE Properties ( propertyId int(11) NOT NULL AUTO_INCREMENT, name varchar(80) NOT NULL ) CREATE TABLE Customers ( customerId int(11) NOT NULL AUTO_INCREMENT, customerName varchar(80) NOT NULL ) CREATE TABLE PropertyValues ( propertyId int(11) NOT NULL, customerId int(11) NOT NULL, value varchar(80) NOT NULL ) INSERT INTO Properties VALUES (1, 'Age'); INSERT INTO Properties VALUES (2, 'Weight'); INSERT INTO Customers VALUES (1, 'Bob'); INSERT INTO Customers VALUES (2, 'Tom'); INSERT INTO PropertyValues VALUES (1, 1, '34'); INSERT INTO PropertyValues VALUES (2, 1, '80KG'); INSERT INTO PropertyValues VALUES (1, 2, '24'); INSERT INTO PropertyValues VALUES (2, 2, '53KG'); What I would like to do is create a view that has as columns all the ROWS in Properties and has as rows the entries in Customers. The column values are populated from PropertyValues, e.g.:
        customerId  Age  Weight
        1           34   80KG
        2           24   53KG
    I'm thinking I need a stored procedure to do this and perhaps a materialised view (the entries in the table "Properties" change rarely). Any tips?

    Read the article

  • jQuery - Fade in new Background Image?

    - by JasonS
    Hi, I think I may be trying something that isn't possible. I was recently tasked with creating an HTML version of a Flash site. On the Flash site, when you click on the body the image changes. I have done this with jQuery. It's brilliant! However, it isn't "flash" enough. The powers that be want a fade effect between images. This is where I have come completely undone. This is how my script works at the moment. Images are stored in a div called photos. This is hidden. <div id="photos"> <img src="image.jpg" alt="Some caption" style="#page-bg-color" /> </div> When the page is loaded, jQuery checks to see if the first image is loaded. If it isn't, then a loading symbol spins. When the image is loaded, it becomes the body background. $("body").css('background', 'url(' + photos[currentImage]["url"] + ') no-repeat 50% 50% fixed ' + photos[currentImage]["background"]); I have tried the following. I have tried animate. However, this doesn't work. I have tried this plugin: http://plugins.jquery.com/project/bgImageTransition This doesn't work either. It appears to clone the tag and do something. I assume it isn't working because you cannot clone the body tag. That, or it is really old and is no longer compatible with the current version of jQuery. I fear that I have coded my way down a dead end. Does anybody know how I could make this work?

    Read the article

  • Calculate the year for ending month/day?

    - by Dave Jarvis
    Given: Start Year, Start Month & Start Day, End Month & End Day. What SQL statement results in TRUE if a date lands between the Start and End days? 1st example: Start Date = 11-22, End Date = 01-17, Start Year = 2009, Specific Date = 2010-01-14: TRUE. 2nd example: Start Date = 11-22, End Date = 11-16, Start Year = 2009, Specific Date = 2010-11-20: FALSE. 3rd example: Start Date = 02-25, End Date = 03-19, Start Year = 2004, Specific Date = 2004-02-29: TRUE. I was thinking of using the MySQL functions datediff and sign plus a CASE condition to determine whether the year wraps, but it seems rather expensive. I am looking for a simple, efficient calculation. Update 1: The problem is the end date cannot simply use the year. The year must be increased if the end month/day combination happens before the start date. The start date is easy: Start Date = date( concat_ws( '-', year, Start Month, Start Day ) ) The end date is not so simple. Update 2: Here is what I was thinking about for obtaining the end year: end_year = case sign( diff( date( concat_ws( year, start_month, start_day ) ), date( concat_ws( year, end_month, end_day ) ) ) ) when -1 then Start_Year + 1 else Start_Year end case Then wrap that expression (once syntactically correct) inside of another date, followed by a BETWEEN statement. Update 3: To clear up some confusion: there is no end year. The end year must be calculated. Thank you!
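
    The wrap rule itself is small; here it is sketched in C# rather than SQL, purely to pin down the calculation being asked for (the helper name is made up, and a Feb 29 end day in a non-leap end year would need extra handling).

        using System;

        static class DateRangeCheck
        {
            // Returns true if 'specific' falls in the inclusive range starting at
            // startYear-startMonth-startDay; the end month/day rolls into the next
            // year when it comes before the start month/day.
            public static bool Contains(int startYear, int startMonth, int startDay,
                                        int endMonth, int endDay, DateTime specific)
            {
                var start = new DateTime(startYear, startMonth, startDay);

                bool wraps = endMonth < startMonth
                          || (endMonth == startMonth && endDay < startDay);
                var end = new DateTime(wraps ? startYear + 1 : startYear, endMonth, endDay);

                return specific >= start && specific <= end;
            }
        }

        // DateRangeCheck.Contains(2009, 11, 22,  1, 17, new DateTime(2010,  1, 14)) -> true
        // DateRangeCheck.Contains(2009, 11, 22, 11, 16, new DateTime(2010, 11, 20)) -> false
        // DateRangeCheck.Contains(2004,  2, 25,  3, 19, new DateTime(2004,  2, 29)) -> true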

    Read the article

  • Return unordered list from hierarchical sql data

    - by Milan
    I have a table with pageId, parentPageId and title columns. Is there a way to return an unordered nested list using ASP.NET, a CTE, a stored procedure, a UDF... anything? The table looks like this:
        PageID  ParentId  Title
        1       null      Home
        2       null      Products
        3       null      Services
        4       2         Category 1
        5       2         Category 2
        6       5         Subcategory 1
        7       5         SubCategory 2
        8       6         Third Level Category 1
        ...
    The result should look like this:
        Home
        Products
            Category 1
                SubCategory 1
                    Third Level Category 1
                SubCategory 2
            Category 2
        Services
    Ideally, the list should contain <a> tags as well, but I hope I can add that myself if I find a way to create the <ul> list. EDIT 1: I thought there was already a solution for this, but it seems there isn't. I wanted to keep it as simple as possible and to avoid using the ASP.NET menu at any cost, because it uses tables by default; then I would have to use CSS Adapters etc. Even if I decide to go down the "ASP.NET menu" route, the only approach I was able to find is this one: http://aspalliance.com/822 which uses DataAdapter and DataSet :( Any more modern or efficient way?
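
    For what it's worth, a common way to turn rows like these into a nested <ul> is to group them by parent and recurse; here is a minimal C# sketch (the class and method names are made up, and the query only has to return the three columns):

        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        class Page { public int Id; public int? ParentId; public string Title; }

        static class MenuBuilder
        {
            // Builds the nested <ul> for every page whose parent is parentId
            // (0 stands in for the NULL parent of top-level pages).
            public static string Build(ILookup<int, Page> byParent, int parentId)
            {
                var children = byParent[parentId].ToList();
                if (children.Count == 0) return "";

                var sb = new StringBuilder("<ul>");
                foreach (var page in children)
                {
                    sb.Append("<li>");
                    sb.Append(page.Title);        // wrap in an <a href="..."> here if needed
                    sb.Append(Build(byParent, page.Id));
                    sb.Append("</li>");
                }
                sb.Append("</ul>");
                return sb.ToString();
            }
        }

        // Usage: string html = MenuBuilder.Build(pages.ToLookup(p => p.ParentId ?? 0), 0);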

    Read the article

  • I am looking for an actual functional web browser control for .NET, maybe a C++ library

    - by Joshua
    I am trying to emulate a web browser in order to execute JavaScript code and then parse the DOM. The System.Windows.Forms.WebBrowser object does not give me the functionality I need. It lets me set the headers, but you cannot set the proxy or clear cookies. Well, you can, but it is not ideal and messes with IE's settings. I've been extending the WebBrowser control by P/Invoking native Windows functions so far, but it is really one hack on top of another. I can mess with the proxy and also clear cookies and such, but this control has its issues as I mentioned. I found something called WebKit .NET (http://webkitdotnet.sourceforge.net/), but I don't see support for setting proxies or cookie manipulation. Can someone recommend a C++/.NET/whatever library to do this? Basically, tell me what I need to do to get an interface similar to this in .NET: // this should probably pause the current thread for the max timeout, // throw an exception on failure or return null w/e, VAGUELY similar to this string WebBrowserEmu::FetchBrowserParsedHtml(Uri url, WebProxy p, int timeoutSeconds, byte[] headers, byte[] postdata); void WebBrowserEmu::ClearCookies(); I am not responsible for my actions.

    Read the article

  • How do you parse the XDG/GNOME/KDE menu/desktop item structure in C++?

    - by Joe Soul-bringer
    I would like to parse the menu structure for GNOME Panels (the standard GNOME desktop application launcher) and its KDE equivalent using C/C++ function calls. That is, I'd like a list of the base menu categories and submenus installed on a given machine. I would like to do this using fairly simple C/C++ function calls (with NO shelling out, please). I understand that these menus are in the standard XDG format. I understand that this menu structure is stored in XML files such as: /home/user/.config/menus/applications.menu I've looked here: http://www.freedesktop.org/wiki/Specifications/menu-spec?action=show&redirect=Standards%2Fmenu-spec but all they offer is the standard and some shell files to insert item entries (I don't want shell scripts, I don't want installation, and I definitely don't want to create a C library from the XDG specification; I want to find the existing menu structure). I've looked here: http://library.gnome.org/admin/system-admin-guide/stable/menustructure-13.html.en for more notes on these structures. None of this gives me a good idea of how to determine the menu structure using a C/C++ program. The actual GNOME menu structures seem to be horrifically hairy things - they don't seem to show the menu structure but to give an XML-coded description of all the changes that the menus have gone through since installation. I assume GNOME Panels parses these files, so there's a function buried somewhere to do this, but I've yet to find where that function is after scanning library.gnome.org for a couple of days. I've scanned the Nautilus source code as well, but Panels seems to live elsewhere or is buried well. Thanks in advance

    Read the article

  • Works in Firefox & Opera, but not IE 8

    - by Ai Pragma
    1) This issue involves just one HTML webpage, let's call it "ajax.html". 2) I have AJAX functions in this webpage that work in both Firefox and IE8. 3) I now attempt generating just the option values of a dropdown list of dates using my AJAX functions, and it works in Firefox & Opera, but not IE8. 4) The surrounding HTML code for the dropdown looks like this: <select name="entry_7_single" id="entry_7" onChange="Ajax_PhpResultsWithVar('./secure/db/SummaryCls.php','entry_8','dateval',this.value)"></select> The onChange call refers to an AJAX function that successfully (both Firefox & IE8) populates a textarea (entry_8) with a description of an event associated with the date selected in this dropdown. 5) An onload call initiates the AJAX function to generate the dropdown list values: <body class="ss-base-body" onLoad="OnLoadWebPage()"> 6) The JS script that calls the AJAX function is as follows: function OnLoadWebPage() { Ajax_PhpResults('./secure/db/GenDateListCls.php','entry_7'); } 7) Since it works in Firefox, but not IE8, I throw the output of the AJAX function into a large Firefox textbox and I get the following:
        <option selected value="8 JUN 2010">8 JUN 2010</option>
        <option value="9 JUN 2010">9 JUN 2010</option>
        <option value="10 JUN 2010">10 JUN 2010</option>
        <option value="11 JUN 2010">11 JUN 2010</option>
    8) There are over a hundred generated, but you get the gist of what the AJAX function generates. Next I will list the PHP function that outputs the above dropdown values:
        <?php
        include_once 'SPSQLite.class.php';
        include_once 'misc_funcs.php';

        class GenDateListCls
        {
            var $dbName;
            var $sqlite;

            function GenDateListCls()
            {
                $this->dbName = 'accrsc.db';
                $this->ConstructEventDates();
            }

            function ConstructEventDates()
            {
                $this->sqlite = new SPSQLite($this->dbName);
                $todayarr = getdate();
                $today = $todayarr[mday] . " " . substr($todayarr[month],0,3) . " " . $todayarr[year];
                $ICalDate = ChangeToICalDate($today);
                $dateQuery = "SELECT dtstart from events where substr(dtstart,1,8) >= '" . $ICalDate . "';";
                $this->sqlite->query($dateQuery);
                $datesResult = $this->sqlite->returnRows();
                foreach (array_reverse($datesResult) as $indx => $row)
                {
                    $normDate = NormalizeICalDate(substr($row[dtstart],0,8));
                    if ($indx == 0)
                    { ?>
                        <option selected value=<?php echo('"' . $normDate . '"'); ?>><?php echo $normDate; ?></option><?php
                    }
                    else
                    { ?>
                        <option value=<?php echo('"' . $normDate . '"'); ?>><?php echo $normDate; ?></option><?php
                    }
                }
                $this->sqlite->close();
            }
        }
        $dateList = new GenDateListCls();
        ?>
    <<< I appreciate any assistance on this matter. Aipragma >>> My background: to let you all know, I am a complete newbie to PHP, AJAX, & JavaScript, and learning it all on my own, no classes. My background is in Linux, Windows, C++, Java, VB, VBA, MS XML, & some HTML.

    Read the article

  • PHP Resize image down and crop using imagemagick

    - by mr12086
    I'm trying to downsize images uploaded by users at the time of upload. This is no problem, but I want to get specific with sizes now. I'm looking for some advice on an algorithm I'm struggling to produce that will take any shape of image - square or rectangular, of any width/height - and downsize it. This image needs to be downsized to a target box size (this changes but is stored in a variable). So it needs to downsize the image so that both the width and height are larger than the width and height of the target, maintaining aspect ratio, but only just. The target size will be used to crop the image so there is no white space around the edges etc. I'm looking for just the algorithm behind creating the correct resize based on images of different dimensions - I can handle the cropping, and even the resizing in most cases, but it fails in a few so I'm reaching out. I'm not really asking for PHP code, more pseudocode. Either is fine obviously. Thanks kindly.
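
    Since pseudocode is welcome here, this is a sketch of the usual "scale to cover, then crop" computation, written in C# purely as neutral pseudocode (the names are made up): scale by the larger of the two ratios so both dimensions just cover the target box, then crop the overhang.

        using System;

        struct BoxSize
        {
            public int W, H;
            public BoxSize(int w, int h) { W = w; H = h; }
        }

        static class CoverResize
        {
            // Returns the resize dimensions that just cover the target box while
            // preserving aspect ratio; the excess is cropped afterwards.
            public static BoxSize Compute(BoxSize source, BoxSize target)
            {
                double scale = Math.Max(
                    (double)target.W / source.W,
                    (double)target.H / source.H);

                int w = (int)Math.Ceiling(source.W * scale);
                int h = (int)Math.Ceiling(source.H * scale);
                return new BoxSize(w, h);   // both >= target, but only just
            }
        }

        // Example: a 1600x900 upload into a 300x300 box resizes to 534x300,
        // and the extra 234px of width is then cropped (e.g. centred).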

    Read the article

  • Load SQL query result data into cache in advance

    - by Marc
    I have the following situation:
        - .NET 3.5 WinForms client app accessing SQL Server 2008
        - Some queries returning a relatively big amount of data are used quite often by a form
        - Users are using local SQL Express and restarting their machines at least daily
        - Other users are working remotely over slow network connections
    The problem is that after a restart, the first time users open this form the queries are extremely slow and take more or less 15s on a fast machine to execute. Afterwards the same queries take only 3s. Of course this comes from the fact that no data is cached and must be loaded from disk first. My question: would it be possible to force the loading of the required data in advance into the SQL Server cache? Note: my first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries will already be cached and execute fast directly. I however don't want to load the result of the queries over to the client, as some users are working remotely or have otherwise slow networks. So I thought of just executing the queries from a stored procedure and putting the results into temporary tables so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?
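
    One variant of the background-worker idea that avoids shipping rows to the client, sketched here in C# (the connection string and table names are placeholders, not from the question): at application start, run cheap aggregate queries over the same tables on a worker thread, which makes SQL Server read the pages into its buffer pool while returning only a single row each. Note that a plain COUNT may be satisfied from the narrowest index, so warming with queries shaped like the real ones is closer to what the form actually needs.

        using System.Data.SqlClient;
        using System.Threading;

        static class CacheWarmer
        {
            // Placeholder warm-up statements; in practice, aggregate over the same
            // tables and indexes that the slow form queries touch.
            static readonly string[] WarmupQueries =
            {
                "SELECT COUNT_BIG(*) FROM dbo.Orders",
                "SELECT COUNT_BIG(*) FROM dbo.OrderLines"
            };

            public static void WarmUpAsync(string connectionString)
            {
                ThreadPool.QueueUserWorkItem(delegate
                {
                    try
                    {
                        using (var conn = new SqlConnection(connectionString))
                        {
                            conn.Open();
                            foreach (var sql in WarmupQueries)
                                using (var cmd = new SqlCommand(sql, conn))
                                    cmd.ExecuteScalar();   // one row back; pages are now cached server-side
                        }
                    }
                    catch
                    {
                        // Warm-up is best effort; ignore failures.
                    }
                });
            }
        }

        // Call CacheWarmer.WarmUpAsync(connectionString) right after application startup.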

    Read the article

  • Why is my .htaccess file redirecting to full server path instead of relative path?

    - by death.au
    I've never had a problem with cakePHP before, but something's odd about this server and is causing the redirects in the .htaccess files to behave oddly. CakePHP uses mod_rewrite in .htaccess files to redirect requests to its own webroot folder. The problem is that the redirects are listing the wrong path and causing a 404 error. My CakePHP application, which is stored in the listings directory, has a .htaccess file as follows: <IfModule mod_rewrite.c> RewriteEngine on RewriteRule ^$ app/webroot/ [R=301,L] RewriteRule (.*) app/webroot/$1 [R=301,L] </IfModule> (*note that the R=301 causes an external redirect so we can see what is going on from our end. It should really omit this flag and do the redirect internally, transparent to end-users) This is supposed to redirect any request from http://hostname.com/~username/listings/ to http://hostname.com/~username/listings/app/webroot/ However, rather than simply adding “app/webroot/” to the end as it is supposed to, it is adding the full server path ( /home/username/public_html/listings/app/webroot/ ) resulting in the final URL http://hostname.com/home/username/public_html/listings/app/webroot/ which is obviously incorrect and triggers a 404 error. The hosting is on a shared hosting account, so that limits what I can do with the settings. I've never seen this happen before, and I'm thinking it's something wrong from the hosting side of things, but if anyone has some helpful suggestions then I can put them to the hosting company as well.

    Read the article

  • Xerces C++ SAX Parsing Problem: expected class-name before '{' token

    - by aduric
    I'm trying to run through an example given for the C++ Xerces XML library implementation. I've copied the code exactly, but I'm having trouble compiling it. error: expected class-name before '{' token I've looked around for a solution, and I know that this error can be caused by circular includes or not defining a class before it is used, but as you can see from the code, I only have 2 files: MySAXHandler.hpp and MySAXHandler.cpp. However, the MySAXHandler class is derived from HandlerBase, which is included. MyHandler.hpp #include <xercesc/sax/HandlerBase.hpp> class MySAXHandler : public HandlerBase { public: void startElement(const XMLCh* const, AttributeList&); void fatalError(const SAXParseException&); }; MySAXHandler.cpp #include "MySAXHandler.hpp" #include <iostream> using namespace std; MySAXHandler::MySAXHandler() { } void MySAXHandler::startElement(const XMLCh* const name, AttributeList& attributes) { char* message = XMLString::transcode(name); cout << "I saw element: "<< message << endl; XMLString::release(&message); } void MySAXHandler::fatalError(const SAXParseException& exception) { char* message = XMLString::transcode(exception.getMessage()); cout << "Fatal Error: " << message << " at line: " << exception.getLineNumber() << endl; XMLString::release(&message); } I'm compiling like so: g++ -L/usr/local/lib -lxerces-c -I/usr/local/include -c MySAXHandler.cpp I've looked through the HandlerBase and it is defined, so I don't know why I can't derive a class from it? Do I have to override all the virtual functions in HandlerBase? I'm kinda new to C++. Thanks in advance.

    Read the article

  • Oracle DBMS_PROFILER only shows Anonymous in the results tables

    - by Greg Reynolds
    I am new to DBMS_PROFILER. All the examples I have seen use a simple top-level procedure to demonstrate the use of the profiler, and from there get all the line numbers etc. I deploy all code in packages, and I am having great difficulty getting my profile session to populate plsql_profiler_units with useful data. Most of my runs look like this:
        RUNID  RUN_COMMENT  UNIT_OWNER   UNIT_NAME    SECS  PERCEN
        -----  -----------  -----------  -----------  ----  ------
        5      Test         <anonymous>  <anonymous>  .00   2.1    Profiler
        5      Test         <anonymous>  <anonymous>  .00   2.1    Profiler
        5      Test         <anonymous>  <anonymous>  .00   2.1    Profiler
    I have just embedded the calls to dbms_profiler.start_profiler, flush_data and stop_profiler as per all the examples. The main difference is that my code is in a package, and calls into other packages. Do you have to profile every single stored procedure in your call stack? If so, that makes this tool a little useless! I have checked http://www.dba-oracle.com/t_plsql_dbms_profiler.htm for hints, among other similar sites.

    Read the article

  • Website does not automatically fit the iPhone screen

    - by Ploetzeneder
    Hello, the following code does not fit onto the iPhone screen; how do I have to define the viewport? <html> <body> <center> <div id="karteu" style="background: url('../customer/Karten/karte1.jpg') no-repeat left center;width:714px;height:540px;" > </div> </body> </html> Normally the site should start zoomed out, so that I first see the whole website scaled down and can then zoom in to see it at its original size. In my case it does not: when I open the site, the image is already at its original size and I have to scroll. But I don't want to scroll - I want to use the normal mobile Safari zoom and then scroll. The solution at the bottom does not zoom anything. I want to see an overview of the image at the beginning, and then be able to zoom with the normal Safari zoom functions.

    Read the article

  • Returning objects from another thread?

    - by Mark
    Trying to follow the hints laid out here, but she doesn't mention how to handle it when your collection needs to return a value, like so: private delegate TValue DequeueHandler(); public virtual TValue Dequeue() { if (dispatcher.CheckAccess()) { --count; var pair = dict.First(); var queue = pair.Value; var val = queue.Dequeue(); if (queue.Count == 0) dict.Remove(pair.Key); OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, val)); return val; } else { dispatcher.BeginInvoke(new DequeueHandler(Dequeue)); } } This obviously won't work, because dispatcher.BeginInvoke doesn't return anything. What am I supposed to do? Or maybe I could replace dequeue with two functions, Peek and Pop, where Peek doesn't really need to be on the UI thread because it doesn't modify anything, right? As a side question, these methods don't need to be "locked" either, do they? If they're all forced to run in the UI thread, then there shouldn't be any concurrency issues, right?
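
    If the call has to marshal to the UI thread and still hand back a value, the synchronous Invoke overload (rather than BeginInvoke) is the usual route, since it returns the delegate's result as an object that can be cast back. A compact sketch of that pattern (assuming dispatcher is a WPF System.Windows.Threading.Dispatcher; the class here is illustrative, not the asker's full collection):

        using System.Collections.Generic;
        using System.Windows.Threading;

        class DispatcherQueue<TValue>
        {
            private readonly Dispatcher dispatcher;
            private readonly Queue<TValue> items = new Queue<TValue>();

            private delegate TValue DequeueHandler();

            public DispatcherQueue(Dispatcher dispatcher) { this.dispatcher = dispatcher; }

            public virtual TValue Dequeue()
            {
                if (!dispatcher.CheckAccess())
                {
                    // Invoke (unlike BeginInvoke) blocks until the delegate has run on
                    // the dispatcher thread and returns its result as object.
                    return (TValue)dispatcher.Invoke(new DequeueHandler(Dequeue));
                }

                // On the dispatcher thread: the real work goes here (simplified).
                return items.Dequeue();
            }
        }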

    Read the article

  • What are the fastest-performing options for a read-only, unordered collection of unique strings?

    - by Dan Tao
    Disclaimer: I realize the totally obvious answer to this question is HashSet<string>. It is absurdly fast, it is unordered, and its values are unique. But I'm just wondering, because HashSet<T> is a mutable class, so it has Add, Remove, etc.; and so I am not sure if the underlying data structure that makes these operations possible makes certain performance sacrifices when it comes to read operations -- in particular, I'm concerned with Contains. Basically, I'm wondering what are the absolute fastest-performing data structures in existence that can supply a Contains method for objects of type string. Within or outside of the .NET framework itself. I'm interested in all kinds of answers, regardless of their limitations. For example I can imagine that some structure might be restricted to strings of a certain length, or may be optimized depending on the problem domain (e.g., range of possible input values), etc. If it exists, I want to hear about it. One last thing: I'm not restricting this to read-only data structures. Obviously any read-write data structure could be embedded inside a read-only wrapper. The only reason I even mentioned the word "read-only" is that I don't have any requirement for a data structure to allow adding, removing, etc. If it has those functions, though, I won't complain.
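
    As a baseline, here is a minimal sketch (not from the question; the wrapper name is made up) of the obvious candidate exposed read-only: a HashSet<string> built once with an ordinal comparer, so lookups avoid culture-aware comparison costs. Anything faster would have to beat its single hash plus bucket probe per Contains call.

        using System;
        using System.Collections.Generic;

        // Read-only wrapper: the set is built once and only Contains is exposed.
        public sealed class StringSet
        {
            private readonly HashSet<string> set;

            public StringSet(IEnumerable<string> values)
            {
                // Ordinal comparison is the cheapest correct choice for exact matches.
                set = new HashSet<string>(values, StringComparer.Ordinal);
            }

            public bool Contains(string value)
            {
                return set.Contains(value);
            }
        }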

    Read the article

  • Designing a class in such a way that it doesn't become a "God object"

    - by devoured elysium
    I'm designing an application that will allow me to draw some functions on a graphic. Each function will be drawn from a set of points that I will pass to this graphic class. There are different kinds of points, all inheriting from a MyPoint class. For some kinds of points it will be just printing them on the screen as they are, others can be ignored, others added, so there is some kind of logic associated with them that can get complex. How to actually draw the graphic is not the main issue here. What bothers me is how to arrange the code so that this GraphicMaker class doesn't become the so-called God object. It would be easy to make something like this: class GraphicMaker { ArrayList<Point> points = new ArrayList<Point>(); public void AddPoint(Point point) { points.add(point); } public void DoDrawing() { foreach (Point point in points) { if (point is PointA) { // some logic here } else if (point is PointXYZ) { // ...etc } } } } How would you do something like this? I have a feeling the correct way would be to put the drawing logic on each Point object (so each child class of Point would know how to draw itself), but two problems arise: There will be kinds of points that need to know all the other points that exist in the GraphicMaker class to know how to draw themselves. I can make a lot of the methods/properties of the Graphic class public, so that all the points have a reference to the Graphic class and can run all their logic as they want, but isn't that a big price to pay for not wanting to have a God class?
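
    One hedged sketch of a common middle ground (the names here are illustrative, not from the question): let each Point subclass draw itself, but hand it a narrow context interface instead of the whole GraphicMaker, so points that need to see their neighbours get exactly that and nothing more.

        using System.Collections.Generic;

        // Narrow view of the graphic: only what points legitimately need.
        public interface IDrawContext
        {
            IEnumerable<Point> AllPoints { get; }
            void PlotPixel(double x, double y);
        }

        public abstract class Point
        {
            public double X, Y;
            public abstract void Draw(IDrawContext context);
        }

        public class SimplePoint : Point
        {
            public override void Draw(IDrawContext context)
            {
                context.PlotPixel(X, Y);              // draws itself as-is
            }
        }

        public class GraphicMaker : IDrawContext
        {
            private readonly List<Point> points = new List<Point>();

            public IEnumerable<Point> AllPoints { get { return points; } }
            public void PlotPixel(double x, double y) { /* actual rendering here */ }

            public void AddPoint(Point point) { points.Add(point); }

            public void DoDrawing()
            {
                foreach (var point in points)
                    point.Draw(this);                 // no type checks, no God object
            }
        }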

    Read the article

  • How to upload files and store them in a server local path when MS SQL Server allows remote connections

    - by user193655
    I am developing a Win32 Windows application with Delphi and MS SQL Server. It works fine on a LAN, but I am trying to add support for SQL Server remote connections (i.e. working with a DB that can be accessed via an external IP, as described in this article: http://support.microsoft.com/default.aspx?scid=kb;EN-US;914277). Basically I have a table in the DB where I keep the DocumentID, the document description and the document path (like \\FILESERVER\MyApplicationDocuments\45.zip). Of course \\FILESERVER is a local (LAN) path for the server but not for the client (as I am now trying to add support for remote connections). So I need a way to access \\FILESERVER even though of course I cannot see it on the LAN. I found the following T-SQL code snippet that is perfect for the "download trick": SELECT BulkColumn as MyFile FROM OPENROWSET(BULK '\\FILESERVER\MyApplicationDocuments\45.zip' , SINGLE_BLOB) AS X With the code above I can download a file on the client. But how do I upload one? I need an "upload trick" to be able to insert new files, but also to delete or replace existing files. Can anyone suggest one? If a trick is not available, could you suggest an alternative, like an extended stored procedure or calling some .NET assembly from the server?

    Read the article

  • Question about WeakHashMap

    - by michael
    Hi, the Javadoc at "http://java.sun.com/j2se/1.4.2/docs/api/java/util/WeakHashMap.html" says: "Each key object in a WeakHashMap is stored indirectly as the referent of a weak reference. Therefore a key will automatically be removed only after the weak references to it, both inside and outside of the map, have been cleared by the garbage collector." And then: "Note that a value object may refer indirectly to its key via the WeakHashMap itself; that is, a value object may strongly refer to some other key object whose associated value object, in turn, strongly refers to the key of the first value object." But shouldn't both the key and the value be held through weak references in a WeakHashMap? I.e. if memory is low, the GC would free the memory held by the value object (since the value object most likely takes up more memory than the key object in most cases)? And if the GC frees the value object, the key object can be freed as well? Basically, I am looking for a HashMap which will reduce memory usage when memory is low (the GC collects the value and key objects if necessary). Is that possible in Java? Thank you.

    Read the article
