Search Results

Search found 9128 results on 366 pages for 'big theta'.

Page 350/366

  • Mysql 100% CPU + Slow query

    - by felipeclopes
    I'm using the RDS database from Amazon with some very big tables, and yesterday I started to face 100% CPU utilisation on the server and a bunch of slow query logs that were not happening before. I tried to check the queries that were running and got this result from the EXPLAIN command:

    | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
    | 1 | SIMPLE | businesses | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index; Using temporary; Using filesort |
    | 1 | SIMPLE | activities_businesses | ref | PRIMARY,index_activities_users_on_business_id,index_tweets_users_on_tweet_id_and_business_id | index_activities_users_on_business_id | 9 | const | 2252 | Using index condition; Using where |
    | 1 | SIMPLE | activities_b_taggings_975e9c4 | ref | taggings_idx | taggings_idx | 782 | const,myapp_production.activities_businesses.id,const | 1 | Using index condition; Using where |
    | 1 | SIMPLE | activities | eq_ref | PRIMARY,index_activities_on_created_at | PRIMARY | 8 | myapp_production.activities_businesses.activity_id | 1 | Using where |

    Also, checking the process list, I got something like this:

    | Id | User | Host | db | Command | Time | State | Info |
    | 1 | my_app | my_ip:57152 | my_app_production | Sleep | 0 | | NULL |
    | 2 | my_app | my_ip:57153 | my_app_production | Sleep | 2 | | NULL |
    | 3 | rdsadmin | localhost:49441 | NULL | Sleep | 9 | | NULL |
    | 6 | my_app | my_other_ip:47802 | my_app_production | Sleep | 242 | | NULL |
    | 7 | my_app | my_other_ip:47807 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
    | 8 | my_app | my_other_ip:47809 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
    | 9 | my_app | my_other_ip:47810 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
    | 10 | my_app | my_other_ip:47811 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
    | 11 | my_app | my_other_ip:47813 | my_app_production | Query | 231 | Sending data | SELECT my_fields... |
    ...
    So based on the numbers, it looks like there is no reason to have a slow query, since the worst execution plan is the one that goes through 2k rows, which is not much.

    Edit 1: One more piece of information that might be useful is the slow query log:

    SET timestamp=1401457485; SELECT my_query...
    # User@Host: myapp[myapp] @ ip-10-195-55-233.ec2.internal [IP] Id: 435
    # Query_time: 95.830497 Lock_time: 0.000178 Rows_sent: 0 Rows_examined: 1129387

    Edit 2: After profiling, I got this result. The result has approximately 250 rows with two columns each:

    | state | duration |
    | Sending data | 272 |
    | removing tmp table | 0 |
    | optimizing | 0 |
    | Creating sort index | 0 |
    | init | 0 |
    | cleaning up | 0 |
    | executing | 0 |
    | checking permissions | 0 |
    | freeing items | 0 |
    | Creating tmp table | 0 |
    | query end | 0 |
    | statistics | 0 |
    | end | 0 |
    | System lock | 0 |
    | Opening tables | 0 |
    | logging slow query | 0 |
    | Sorting result | 0 |
    | starting | 0 |
    | closing tables | 0 |
    | preparing | 0 |

    Edit 3: Adding the query as requested:

    SELECT activities.share_count, activities.created_at
    FROM `activities_businesses`
    INNER JOIN `businesses` ON `businesses`.`id` = `activities_businesses`.`business_id`
    INNER JOIN `activities` ON `activities`.`id` = `activities_businesses`.`activity_id`
    JOIN taggings activities_b_taggings_975e9c4 ON activities_b_taggings_975e9c4.taggable_id = activities_businesses.id
      AND activities_b_taggings_975e9c4.taggable_type = 'ActivitiesBusiness'
      AND activities_b_taggings_975e9c4.tag_id = 104
      AND activities_b_taggings_975e9c4.created_at >= '2014-04-30 13:36:44'
    WHERE ( businesses.id = 1 )
      AND ( activities.created_at > '2014-04-30 13:36:44' )
      AND ( activities.created_at < '2014-05-30 12:27:03' )
    ORDER BY activities.created_at;

    Edit 4: There may be a chance that the indexes are not being applied due to a difference in column type between taggings and activities_businesses, on the taggable_id column.

    mysql> SHOW COLUMNS FROM activities_businesses;
    | Field | Type | Null | Key | Default | Extra |
    | id | int(11) | NO | PRI | NULL | auto_increment |
    | activity_id | bigint(20) | YES | MUL | NULL | |
    | business_id | bigint(20) | YES | MUL | NULL | |
    3 rows in set (0.01 sec)

    mysql> SHOW COLUMNS FROM taggings;
    | Field | Type | Null | Key | Default | Extra |
    | id | int(11) | NO | PRI | NULL | auto_increment |
    | tag_id | int(11) | YES | MUL | NULL | |
    | taggable_id | bigint(20) | YES | | NULL | |
    | taggable_type | varchar(255) | YES | | NULL | |
    | tagger_id | int(11) | YES | | NULL | |
    | tagger_type | varchar(255) | YES | | NULL | |
    | context | varchar(128) | YES | | NULL | |
    | created_at | datetime | YES | | NULL | |

    So it is examining way more rows than it shows in the EXPLAIN output, probably because some indexes are not being applied. Can you guys help me with that?
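    A sketch of the kind of fix Edit 4 points at, stated as an assumption rather than a verified diagnosis (the index name below is made up, and a change like this belongs on a staging copy first):

      -- Align the join column types, and give the taggings lookup a composite
      -- index that also covers the created_at range filter:
      ALTER TABLE taggings MODIFY taggable_id INT(11);
      ALTER TABLE taggings
        ADD INDEX idx_taggings_lookup (taggable_type, taggable_id, tag_id, created_at);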

    Read the article

  • Boost::Spirit::Qi autorules -- avoiding repeated copying of AST data structures

    - by phooji
    I've been using Qi and Karma to do some processing on several small languages. Most of the grammars are pretty small (20-40 rules). I've been able to use autorules almost exclusively, so my parse trees consist entirely of variants, structs, and std::vectors. This setup works great for the common case: 1) parse something (Qi), 2) make minor manipulations to the parse tree (visitor), and 3) output something (Karma). However, I'm concerned about what will happen if I want to make complex structural changes to a syntax tree, like moving big subtrees around. Consider the following toy example: a grammar for s-expr-style logical expressions that uses autorules...

    // Inside grammar class; rule names match struct names...
    pexpr %= pand | por | pnot | var | bconst;
    pand %= lit("(and ") >> (pexpr % lit(" ")) >> ")";
    por %= lit("(or ") >> (pexpr % lit(" ")) >> ")";
    pnot %= lit("(not ") >> pexpr >> ")";

    ...which leads to a parse tree representation that looks like this...

    struct var { std::string name; };
    struct bconst { bool val; };
    struct pand;
    struct por;
    struct pnot;
    typedef boost::variant<bconst, var, boost::recursive_wrapper<pand>, boost::recursive_wrapper<por>, boost::recursive_wrapper<pnot> > pexpr;
    struct pand { std::vector<pexpr> operands; };
    struct por { std::vector<pexpr> operands; };
    struct pnot { pexpr victim; };
    // Many Fusion Macros here

    Suppose I have a parse tree that looks something like this:

           pand
         / ... \
       por     por
       / \     / \
     var var var var

    (The ellipsis means 'many more children of similar shape for pand.') Now, suppose that I want to negate each of the por nodes, so that the end result is:

           pand
         / ... \
      pnot     pnot
       |        |
      por      por
      / \      / \
    var var  var var

    The direct approach would be, for each por subtree:
    - create a pnot node (copies the por in construction);
    - re-assign the appropriate vector slot in the pand node (copies the pnot node and its por subtree).
    Alternatively, I could construct a separate vector, and then replace (swap) the pand vector wholesale, eliminating a second round of copying. All of this seems cumbersome compared to a pointer-based tree representation, which would allow the pnot nodes to be inserted without any copying of existing nodes. My question: Is there a way to avoid copy-heavy tree manipulations with autorule-compliant data structures? Should I bite the bullet and just use non-autorules to build a pointer-based AST (e.g., http://boost-spirit.com/home/2010/03/11/s-expressions-and-variants/)?
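    One way to keep the copying down within value-based variants, sketched against the struct definitions above (an illustration, not a verified answer; pre-C++11, one deep copy per wrapped subtree is unavoidable):

      // Assumes the pexpr/pand/por/pnot definitions from the question.
      // Builds the replacement operand list out of place, copying each por
      // subtree exactly once, directly into its final slot; the closing swap
      // exchanges vector buffers in O(1), so there is no second wholesale copy.
      void negate_ors(pand& node) {
          std::vector<pexpr> rewritten;
          rewritten.reserve(node.operands.size());
          for (std::size_t i = 0; i < node.operands.size(); ++i) {
              if (por* p = boost::get<por>(&node.operands[i])) {
                  rewritten.push_back(pexpr(pnot()));            // cheap empty wrapper
                  boost::get<pnot>(rewritten.back()).victim = *p; // the one deep copy
              } else {
                  rewritten.push_back(node.operands[i]);          // non-por children still copy
              }
          }
          node.operands.swap(rewritten);                          // O(1) buffer exchange
      }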

    Read the article

  • parallel computation for an Iterator of elements in Java

    - by Brian Harris
    I've had the same need a few times now and wanted to get other thoughts on the right way to structure a solution. The need is to perform some operation on many elements on many threads without needing to have all elements in memory at once, just the ones under computation. As in, Iterables.partition is insufficient because it brings all elements into memory up front. Expressing it in code, I want to write a BulkCalc2 that does the same thing as BulkCalc1, just in parallel. Below is sample code that illustrates my best attempt. I'm not satisfied because it's big and ugly, but it does seem to accomplish my goals of keeping threads highly utilized until the work is done, propagating any exceptions during computation, and not having more than numThreads instances of BigThing necessarily in memory at once. I'll accept the answer which meets the stated goals in the most concise way, whether it's a way to improve my BulkCalc2 or a completely different solution. interface BigThing { int getId(); String getString(); } class Calc { // somewhat expensive computation double calc(BigThing bigThing) { Random r = new Random(bigThing.getString().hashCode()); double d = 0; for (int i = 0; i < 100000; i++) { d += r.nextDouble(); } return d; } } class BulkCalc1 { final Calc calc; public BulkCalc1(Calc calc) { this.calc = calc; } public TreeMap<Integer, Double> calc(Iterator<BigThing> in) { TreeMap<Integer, Double> results = Maps.newTreeMap(); while (in.hasNext()) { BigThing o = in.next(); results.put(o.getId(), calc.calc(o)); } return results; } } class SafeIterator<T> { final Iterator<T> in; SafeIterator(Iterator<T> in) { this.in = in; } synchronized T nextOrNull() { if (in.hasNext()) { return in.next(); } return null; } } class BulkCalc2 { final Calc calc; final int numThreads; public BulkCalc2(Calc calc, int numThreads) { this.calc = calc; this.numThreads = numThreads; } public TreeMap<Integer, Double> calc(Iterator<BigThing> in) { ExecutorService e = Executors.newFixedThreadPool(numThreads); List<Future<?>> futures = Lists.newLinkedList(); final Map<Integer, Double> results = new MapMaker().concurrencyLevel(numThreads).makeMap(); final SafeIterator<BigThing> it = new SafeIterator<BigThing>(in); for (int i = 0; i < numThreads; i++) { futures.add(e.submit(new Runnable() { @Override public void run() { while (true) { BigThing o = it.nextOrNull(); if (o == null) { return; } results.put(o.getId(), calc.calc(o)); } } })); } e.shutdown(); for (Future<?> future : futures) { try { future.get(); } catch (InterruptedException ex) { // swallowing is OK } catch (ExecutionException ex) { throw Throwables.propagate(ex.getCause()); } } return new TreeMap<Integer, Double>(results); } }
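    For comparison, a sketch of the same contract using only the JDK's own back-pressure machinery (untested here, and it swaps Guava's Throwables.propagate for a plain RuntimeException): a bounded work queue plus CallerRunsPolicy means the submitting thread blocks instead of buffering every element, so at most roughly 2 x numThreads BigThings are materialized at once.

      import java.util.*;
      import java.util.concurrent.*;

      class BulkCalc3 {
          final Calc calc;
          final int numThreads;

          BulkCalc3(Calc calc, int numThreads) {
              this.calc = calc;
              this.numThreads = numThreads;
          }

          public TreeMap<Integer, Double> calc(Iterator<BigThing> in) {
              ThreadPoolExecutor pool = new ThreadPoolExecutor(
                      numThreads, numThreads, 0L, TimeUnit.MILLISECONDS,
                      new ArrayBlockingQueue<Runnable>(numThreads),  // bounded buffer
                      new ThreadPoolExecutor.CallerRunsPolicy());    // submitter blocks when full
              final ConcurrentMap<Integer, Double> results =
                      new ConcurrentHashMap<Integer, Double>();
              List<Future<?>> futures = new LinkedList<Future<?>>();
              while (in.hasNext()) {
                  final BigThing o = in.next();  // pulled only when there is room to run it
                  futures.add(pool.submit(new Runnable() {
                      public void run() {
                          results.put(o.getId(), calc.calc(o));
                      }
                  }));
              }
              pool.shutdown();
              for (Future<?> f : futures) {      // futures are small; BigThings are released after calc
                  try {
                      f.get();                   // propagate computation failures
                  } catch (InterruptedException ex) {
                      // swallowing is OK, matching BulkCalc2
                  } catch (ExecutionException ex) {
                      throw new RuntimeException(ex.getCause());
                  }
              }
              return new TreeMap<Integer, Double>(results);
          }
      }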

    Read the article

  • UVA Online Judge 3n+1 : Right answer is Wrong answer

    - by Samuraisoulification
    I've been toying with this problem for more than a week now. I have optimized it a lot, and I seem to be getting the right answer, since it's the same as when I compare it to others' answers that got accepted, but I keep getting Wrong Answer. I'm not sure what's going on! Does anyone have any advice? I think it's a problem with the input or the output, because I'm not exactly sure how this judge thing works. So if anyone could pinpoint the problem, and also give me any advice on my code, I'd be very appreciative!!!

    #include <iostream> #include <cstdlib> #include <stdio.h> #include <vector> using namespace std; class Node{ // node for each number that has the cycles and number private: int number; int cycles; bool cycleset; // so it knows whether to re-set the cycle public: Node(int num){ number = num; cycles = 0; cycleset = false; } int getnumber(){ return number; } int getcycles(){ return cycles; } void setnumber(int num){ number = num; } void setcycles(int num){ cycles = num; cycleset = true; } bool cycled(){ return cycleset; } }; class Cycler{ private: vector<Node> cycleArray; int biggest; int cycleReal(unsigned int number){ // actually cycles through the number int cycles = 1; if (number != 1) { if (number < 1000000) { // makes sure it's in vector bounds if (!cycleArray[number].cycled()) { // sees if it's been cycled if (number % 2 == 0) { cycles += this->cycleReal((number / 2)); } else { cycles += this->cycleReal((3 * number) + 1); } } else { // if cycled get the number of cycles and don't re-calculate, ends recursion cycles = cycleArray[number].getcycles(); } } else { // continues recursing if it's too big for the vector if (number % 2 == 0) { cycles += this->cycleReal((number / 2)); } else { cycles += this->cycleReal((3 * number) + 1); } } } if(number < 1000000){ // sets cycles table for the number in the vector if (!cycleArray[number].cycled()) { cycleArray[number].setcycles(cycles); } } return cycles; } public: Cycler(){ biggest = 0; for(int i = 0; i < 1000000; i++){ // initialize the vector, set the numbers Node temp(i); cycleArray.push_back(temp); } } int cycle(int start, int end){ // cycles through the inputted numbers. int size = 0; for(int i = start; i < end ; i++){ size = this->cycleReal(i); if(size > biggest){ biggest = size; } } int temp = biggest; biggest = 0; return temp; } int getBiggest(){ return biggest; } }; int main() { Cycler testCycler; int i, j; while(cin>>i>>j){ //read in until \n int biggest = 0; if(i > j){ biggest = testCycler.cycle(j, i); }else{ biggest = testCycler.cycle(i, j); } cout << i << " " << j << " " << biggest << endl; } return 0; }
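    Two guesses at the Wrong Answer, offered as hunches rather than certainties: the judge's range [i, j] is inclusive at both ends, but cycle() loops with i < end, so the endpoint is never tested; and for starting values below 1,000,000 the intermediate 3n+1 values can exceed 32 bits, so the working variable should be 64-bit. A minimal self-contained version with both fixes (memoization omitted for clarity):

      #include <algorithm>
      #include <cstdio>

      static int cycleLength(long long n) {
          int cycles = 1;
          while (n != 1) {
              n = (n % 2 == 0) ? n / 2 : 3 * n + 1;  // may exceed 2^32 mid-sequence
              ++cycles;
          }
          return cycles;
      }

      int main() {
          int i, j;
          while (std::scanf("%d %d", &i, &j) == 2) {
              int lo = std::min(i, j), hi = std::max(i, j), best = 0;
              for (int n = lo; n <= hi; ++n)          // inclusive upper bound
                  best = std::max(best, cycleLength(n));
              std::printf("%d %d %d\n", i, j, best); // i and j in original order
          }
          return 0;
      }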

    Read the article

  • Smooth Div Scroll jquery not scrolling

    - by Razor
    Smooth Div Scroll is great, but for some reason the area no longer scrolls when I edit or remove the #makeMeScrollable or #makeMeScrollable div.scrollableArea * rules. When I leave them as they are, it works, which is a problem for customization: it won't work after I take the "*" out of div.scrollableArea *, or if I edit that part at all. It's been frustrating figuring out why the part that is supposed to be editable does not work at all. Any help with this jQuery plugin would be appreciated! Thanks in advance!

    /* You can alter this CSS in order to give SmoothDivScroll your own look'n'feel */

    /* Invisible left hotspot */
    div.scrollingHotSpotLeft {
        /* The hotspots have a minimum width of 75 pixels and, if there is room, they will grow and occupy 10% of the scrollable area (20% combined). Adjust it to your own taste. */
        min-width: 75px; width: 10%; height: 100%;
        /* There is a big background image and it's used to solve some problems I experienced in Internet Explorer 6. */
        background-image: url(../images/big_transparent.gif); background-repeat: repeat; background-position: center center;
        position: absolute; z-index: 200; left: 0;
        /* The first cursor url is for Firefox and other browsers, the second is for Internet Explorer */
        cursor: url(../images/cursors/cursor_arrow_left.cur), url(images/cursors/cursor_arrow_left.cur), w-resize;
    }

    /* Visible left hotspot */
    div.scrollingHotSpotLeftVisible {
        background-image: url(../images/arrow_left.gif); background-color: #fff; background-repeat: no-repeat;
        /* Standard CSS3 opacity setting */
        opacity: 0.35;
        /* Opacity for really old versions of Mozilla Firefox (0.9 or older) */
        -moz-opacity: 0.35;
        /* Opacity for Internet Explorer. */
        filter: alpha(opacity = 35);
        /* Use zoom to trigger "hasLayout" in Internet Explorer 6 or older versions */
        zoom: 1;
    }

    /* Invisible right hotspot */
    div.scrollingHotSpotRight {
        min-width: 75px; width: 10%; height: 100%;
        background-image: url(../images/big_transparent.gif); background-repeat: repeat; background-position: center center;
        position: absolute; z-index: 200; right: 0;
        cursor: url(../images/cursors/cursor_arrow_right.cur), url(images/cursors/cursor_arrow_right.cur), e-resize;
    }

    /* Visible right hotspot */
    div.scrollingHotSpotRightVisible {
        background-image: url(../images/arrow_right.gif); background-color: #fff; background-repeat: no-repeat;
        opacity: 0.35; filter: alpha(opacity = 35); -moz-opacity: 0.35; zoom: 1;
    }

    /* The scroll wrapper is always the same width and height as the containing element (div). Overflow is hidden because you don't want to show all of the scrollable area. */
    div.scrollWrapper { position: relative; overflow: hidden; width: 100%; height: 100%; }

    div.scrollableArea { position: relative; width: auto; height: 100%; }

    #makeMeScrollable { width:100%; height: 330px; position: relative; }

    #makeMeScrollable div.scrollableArea * { position: relative; float: left; margin: 0; padding: 0; }

    http://www.smoothdivscroll.com/ (the link above is the jQuery plugin I am talking about)
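    A hedged guess at why the * rule matters (based on how this kind of scroller lays items out, not on the plugin's docs): it floats the scrollable area's children side by side into one long horizontal strip; without the float the items wrap, and there is nothing horizontal left to scroll. When customizing, keeping an equivalent rule for the children is probably the safe starting point:

      /* keep the children floated into one horizontal strip; the selector and
         properties mirror the original * rule (> * targets direct children) */
      #makeMeScrollable div.scrollableArea > * {
          position: relative;
          float: left;    /* remove this and the items wrap; scrolling stops */
          margin: 0;
          padding: 0;
      }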

    Read the article

  • File Upload drops with no reason

    - by sufoid
    Hello, I want to make a file upload. The script should take the image, resize it, and upload it. But it seems there is an error in the upload that is unknown to me. Here is the code:

    define ("MAX_SIZE","2000"); // maximum size for uploaded images define ("WIDTH","107"); // width of thumbnail define ("HEIGHT","107"); // alternative height of thumbnail (portrait 107x80) define ("WIDTH2","600"); // width of (compressed) photo define ("HEIGHT2","600"); // alternative height of (compressed) photo (portrait 600x450) if (isset($_POST['Submit'])) { // iterate through all upload fields foreach ($_FILES as $key => $value) { //read name of user-file $image = $_FILES[$key]['name']; // if it is not empty if ($image) { $filename = stripslashes($_FILES[$key]['name']); // get original name of file from client's machine $extension = getExtension($filename); // get extension of file in lower case format $extension = strtolower($extension); // if extension not known, output error // otherwise continue if (($extension != "jpg") && ($extension != "jpeg") && ($extension != "png") && ($extension != "gif")) { echo '<div class="failure">Fehler bei Datei '. $_FILES[$key]['name'] .': Unbekannter Dateityp: Es können nur Dateien vom Typ .gif, .jpg oder .png hochgeladen werden.</div>'; } else { // get size of image in bytes // $_FILES[\'image\'][\'tmp_name\'] >> temporary filename of file in which the uploaded file was stored on server $size = getimagesize($_FILES[$key]['tmp_name']); $sizekb = filesize($_FILES[$key]['tmp_name']); // if image size exceeds defined maximum size, output error // otherwise continue if ($sizekb > MAX_SIZE*1024) { echo '<div class="failure">Fehler bei Datei '. $_FILES[$key]['name'] .': Die Datei konnte nicht hochgeladen werden: die Dateigröße überschreitet das Limit von 2MB.</div>'; } else { $rand = md5(rand() * time()); // create random file name $image_name = $rand.'.'.$extension; // unique name (random number) // new name contains full path of storage location (images folder) $consname = "photos/".$image_name; // path to big image $consname2 = "photos/thumbs/".$image_name; // path to thumbnail $copied = copy($_FILES[$key]['tmp_name'], $consname); $copied = copy($_FILES[$key]['tmp_name'], $consname2); $sql="INSERT INTO photos (galery_id, photo, thumb) VALUES (". $id .", '$consname', '$consname2')" or die(mysql_error()); $query = mysql_query($sql) or die(mysql_error()); // if image hasn't been uploaded successfully, output error // otherwise continue if (!$copied) { echo '<div class="failure">Fehler bei Datei '. $_FILES[$key]['name'] .': Die Datei konnte nicht hochgeladen werden.</div>'; } else { $thumb_name = $consname2; // path for thumbnail for creation & storage // call to function: create thumbnail // parameters: image name, thumbnail name, specified width and height $thumb = make_thumb($consname,$thumb_name,WIDTH,HEIGHT); $thumb = make_thumb($consname,$consname,WIDTH2,HEIGHT2); } } } } } // current image could be uploaded successfully echo '<div class="success">'. $success .' Foto(s) erfolgreich hochgeladen!</div>'; showForm(); // call to function: create upload form }
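    Two things in this code look suspicious (guesses, since the failing case is not shown): $copied is overwritten by the second copy(), so a failed first copy goes unnoticed, and the closing success message echoes $success, which is never defined anywhere in the code shown. A hedged sketch of that step, using move_uploaded_file() (generally preferred for uploads) and checking each copy separately:

      // Sketch only; $success is introduced here as a counter, which is an
      // assumption, since the original never sets it.
      $success = 0;
      $movedBig = move_uploaded_file($_FILES[$key]['tmp_name'], $consname);
      // the tmp file is gone after the move, so build the thumb from the big copy
      $copiedThumb = $movedBig && copy($consname, $consname2);
      if ($movedBig && $copiedThumb) {
          $success++;
      } else {
          echo '<div class="failure">Fehler bei Datei '
             . htmlspecialchars($_FILES[$key]['name'])
             . ': Die Datei konnte nicht hochgeladen werden.</div>';
      }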

    Read the article

  • What is the best way to solve an Objective-C namespace collision?

    - by Mecki
    Objective-C has no namespaces; much like C, everything is within one global namespace. Common practice is to prefix classes with initials, e.g. if you are working at IBM, you could prefix them with "IBM"; if you work for Microsoft, you could use "MS"; and so on. Sometimes the initials refer to the project, e.g. Adium prefixes classes with "AI" (as there is no company behind it whose initials you could take). Apple prefixes classes with NS and says this prefix is reserved for Apple only. So far, so good. But prepending 2 to 4 letters to a class name gives a very, very limited namespace. E.g. MS or AI could have entirely different meanings (AI could be Artificial Intelligence, for example), and some other developer might decide to use them and create an equally named class. Bang, namespace collision.

    Okay, if this is a collision between one of your own classes and one of an external framework you are using, you can easily change the naming of your class, no big deal. But what if you use two external frameworks, both frameworks that you don't have the source to and that you can't change? Your application links with both of them and you get name conflicts. How would you go about solving these? What is the best way to work around them in such a way that you can still use both classes?

    In C you can work around these by not linking directly to the library; instead you load the library at runtime using dlopen(), then find the symbol you are looking for using dlsym(), assign it to a global symbol (that you can name any way you like), and then access it through this global symbol. E.g. if you have a conflict because some C library has a function named open(), you could define a variable named myOpen and have it point to the open() function of the library; when you want to use the system open(), you just use open(), and when you want to use the other one, you access it via the myOpen identifier. Is something similar possible in Objective-C, and if not, is there any other clever, tricky solution you can use to resolve namespace conflicts? Any ideas?

    Update: Just to clarify this: answers that suggest how to avoid namespace collisions in advance or how to create a better namespace are certainly welcome; however, I will not accept them as the answer since they don't solve my problem. I have two libraries and their class names collide. I can't change them; I don't have the source of either one. The collision is already there, and tips on how it could have been avoided in advance won't help anymore. I can forward them to the developers of these frameworks and hope they choose a better namespace in the future, but for the time being I'm searching for a solution to work with the frameworks right now within a single application. Any solutions to make this possible?
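    For reference, a minimal sketch of the C-level dlopen()/dlsym() aliasing trick described above (the library path and signature are illustrative only; this does not by itself solve the Objective-C class-name case):

      #include <dlfcn.h>
      #include <stdio.h>

      int main(void) {
          void *lib = dlopen("/usr/lib/libfoo.dylib", RTLD_LAZY); /* made-up path */
          if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }
          /* alias the library's open() under a name of our choosing */
          int (*myOpen)(const char *, int) =
              (int (*)(const char *, int)) dlsym(lib, "open");
          /* myOpen() now calls libfoo's open(); the global open() stays the system's */
          (void) myOpen;
          return 0;
      }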

    Read the article

  • Future of VB.NET? [closed]

    - by Alex Yeung
    Hi all, I worked with C# for years. Last year, I changed my job and the new company uses VB.NET. Of course, theoretically C# and VB.NET are very similar, and I adapted easily. However, I have now worked with VB.NET for a year, and I cannot see any future for it. As a programming language, it is so poor. Here is a list of what C# can do but VB.NET cannot:

    - Case-insensitive variables: how do I think up a new variable name? If I have a property called FolderPath, I need to establish another private variable called _folderPath or m_folderPath. In C#, FolderPath and folderPath are two variables. Moreover, it is a compile error if a variable name is the same as a class name, for example Dim guid = Guid.NewGuid(). (What the...) Again, I need to think up a new variable name.
    - Ad-hoc scope: in C#, we can use {...} to create an ad-hoc scope, and the resources inside {...} will not affect the code outside. There is no such syntax in VB.NET. I can only use If True Then and End If to make a local scope, which is very unclear.
    - In-method regions: sometimes it is unavoidable to have a long, long method. VB.NET does not support regions inside a method. I always need to scroll down through 1000 lines. It wastes my time.
    - No multi-line string definition: in C#, we can write var s = @"..." to define a multi-line string. In VB.NET there is no direct way to do that. The indirect way is to use an XML-literal string: Dim s = <![CDATA[...]]>.Value. However, it is unclear.
    - No block comment: in C#, we have the line comment // and the block comment /* ... */. In VB.NET we only have line comments, which is a very big problem for me.
    - No statement end symbol: statements are separated by line breaks in VB.NET, while statements are separated by ; in C#.
    - Underscore: I think many people know that underscore _ is the statement-continuation symbol. I really dislike it. I know the MS VB.NET language team is going to remove the underscore syntax from VB.NET, but what can we do now? And even when the underscore is removed in the future, what is the advantage of that? I cannot see any advantage!
    - With scope: With is an evil scope. Although it allows shorter statements, it is hard to trace.
    - Default namespace at the project level: it is a nightmare for me.

    The only advantage of VB.NET is property initialization; I think C# cannot do that (correct me if I am wrong):

    Public Property ThisIsMyProperty As String = "MyValue"

    Remarks: I don't think optional method parameters are an advantage of OOP. Given those disadvantages, I cannot see the future of VB.NET. Does anyone see a future for VB.NET?

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id and loads a file based on the DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20ms), times that are too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior.

    For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php. There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23 AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43 AM, 20 seconds later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58!

    I'm suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [different than above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

    Request time | Complete time
    04:32:43 | 04:33:36
    04:32:50 | 04:33:36
    04:32:51 | 04:33:38
    04:33:05 | 04:33:38
    04:33:34 | 04:33:42
    04:33:05 | 04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or whether we're just seeing the evidence of some server-level overload as it plays out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php. It bears further investigation whether there's something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if press+125.php is a rabbit hole, but it is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.

    ///DOWNLOAD.php
    $file = new files();
    $file->comparison_filter("id", "=", $id); //sql to load
    if ($file->load()) {
        $file->serve();
    }

    //FILES
    function serve() {
        if ($this->is_loaded) {
            if (file_exists($this->get_value("filename"))) {
                if ($this->get_value("content_type") != "") {
                    header("Content-Type: " . $this->get_value("content_type"));
                }
                header("Content-Length: " . filesize($this->get_value("filename")));
                if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                    header("Cache-Control: private");
                    header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                }
                set_time_limit(0);
                @readfile($this->get_value("filename"));
                exit;
            }
        }
    }
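    One hedged thought, an assumption rather than a confirmed diagnosis: @readfile() on a large file can tie a PHP process up for minutes behind a slow client, and the script keeps running even after the client gives up and retries, which would look exactly like these overlapping requests. Streaming in chunks lets the script notice the abort:

      // Sketch of a chunked replacement for the @readfile() call above:
      set_time_limit(0);
      $fp = fopen($this->get_value("filename"), 'rb');
      while (!feof($fp) && !connection_aborted()) {
          echo fread($fp, 8192);   // send 8 KB at a time
          flush();                 // push it to the client now
      }
      fclose($fp);
      exit;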

    Read the article

  • Very different font sizes across browsers

    - by Yang
    Chrome/WebKit and Firefox have different rendering engines which render fonts differently, in particular with differing dimensions. This isn't too surprising, but what's surprising is the magnitude of some of the differences. I can always tweak individual elements on a page to be more similar, but that's tedious, to say the least. I've been searching for more systematic solutions, but many resources (e.g. SO answers) simply say "use a reset package." While I'm sure this fixes a bunch of other things like padding and spacing, it doesn't seem to make any difference for font dimensions. For instance, if I take the reset package from http://html5reset.org/, I can show pretty big differences (note the layout dimensions shown in the inspectors). [The images below are actually higher res than shown/resized in this answer.]

    <h1 style="font-size:64px; background-color: #eee;">Article Header</h1>

    With Helvetica, Chrome has the shorter height instead.

    <h1 style="font-size:64px; background-color: #eee; font-family: Helvetica">Article Header</h1>

    Using a different font, Chrome again renders a much taller font, but additionally the letter spacing goes haywire (probably due to the boldification of the font):

    <style>
    @font-face { font-family: "MyriadProRegular"; src: url("fonts/myriadpro-regular-webfont.eot"); src: local("?"), url("fonts/myriadpro-regular-webfont.woff") format("woff"), url("fonts/myriadpro-regular-webfont.ttf") format("truetype"), url("fonts/myriadpro-regular-webfont.svg#webfonteknRmz0m") format("svg"); font-weight: normal; font-style: normal; }
    @font-face { font-family: "MyriadProLight"; src: url("fonts/myriadpro-light-webfont.eot"); src: local("?"), url("fonts/myriadpro-light-webfont.woff") format("woff"), url("fonts/myriadpro-light-webfont.ttf") format("truetype"), url("fonts/myriadpro-light-webfont.svg#webfont2SBUkD9p") format("svg"); font-weight: normal; font-style: normal; }
    @font-face { font-family: "MyriadProSemibold"; src: url("fonts/myriadpro-semibold-webfont.eot"); src: local("?"), url("fonts/myriadpro-semibold-webfont.woff") format("woff"), url("fonts/myriadpro-semibold-webfont.ttf") format("truetype"), url("fonts/myriadpro-semibold-webfont.svg#webfontM3ufnW4Z") format("svg"); font-weight: normal; font-style: normal; }
    </style>
    ...
    <h1 style="font-size:64px; background-color: #eee; font-family: MyriadProRegular">Article Header</h1>

    I've tried a few resets/normalize packages to no avail. I just wanted to confirm here that this is indeed a fact of life (even omitting the more glaring offenders like IE and mobile) and that I'm not missing some super-awesome solution to this mess.

    Read the article

  • sql performance on new server

    - by Rapunzo
    My database is running on a PC (AMD Phenom X6, Intel SSD disk, 8 GB DDR3 RAM, and Windows 7 + SQL Server 2008 R2 SP3), and it started struggling: timeout problems and queries up to 30 seconds long once the database reached 200 MB. I also have an old server PC (IBM x-series 266: 72*3 15k RPM SCSI discs with RAID 5, 4 GB RAM, and Windows Server 2003 + SQL Server 2008 R2 SP3), and on it the same query starts to give results in 100 seconds. I tried the query analyser tool for tuning my indexes, but without much improvement. It's a big disappointment for me, because I thought that even though it's an old server PC, it should be more powerful with 15k RPM discs in RAID 5. What should I do? Do I need a $10,000 new server to get good performance from my SQL Server? Can't I use that IBM server? Extra information: there are 50 SQL users and it's an ERP program. Here is my query:

    ALTER FUNCTION [dbo].[fnDispoTerbiye] ( ) RETURNS TABLE AS RETURN ( SELECT MD.dispoNo, SV.sevkNo, M1.musteriAdi AS musteri, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SUM(T.topMetre) AS toplamSevkMetre, MD.dispoMetresi, DT.gelisMetresi, ISNULL(DT.fire, 0) AS fire, SV.sevkTarihi, DT.gelisTarihi, SP.mamulTermin, SD.miktar AS siparisMiktari, M.musteriAdi AS boyahane, MD.akisNotu AS islemler, --dbo.fnAkisIslemleri(MD.dispoNo) DT.partiNo, DT.iplikBoyaId, B.tanimAd AS BoyaTuru, MAX(HD.hamEn) AS hamEn, MAX(HD.hamGramaj) AS hamGramaj, TS.mamulEn, TS.mamulGramaj, DT.atkiCekmesi, DT.cozguCekmesi, DT.fiyat, DV.dovizCins, DT.dovizId, (SELECT CASE WHEN DT.dovizId = 2 THEN CAST(round(SUM(T.topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 2 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 3 THEN CAST(round(SUM(T.topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 3 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 1 THEN CAST(round(SUM(T.topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 1 ORDER BY tarih DESC), 2) AS numeric(18, 2)) END AS Expr1) AS ToplamTLfiyat, DT.aciklama, MD.dispoNotu, SD.siparisId, SD.siparisDetayId, DT.sqlUserName, DT.kayitTarihi, O.orguAd, 'Çözgü=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 1) AS Expr1) + ')' + ' Atki=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 2) AS Expr1) + ')' AS iplikAciklama, DT.prosesOk, dbo.[fnYikamaTalimat](SP.siparisId) yikamaTalimati FROM tblDoviz AS DV WITH(NOLOCK) INNER JOIN tblDispoTerbiye AS DT WITH(NOLOCK) INNER JOIN tblTanimlar AS B WITH(NOLOCK) ON DT.iplikBoyaId = B.tanimId AND B.tanimTurId = 2 ON DV.id = DT.dovizId RIGHT OUTER JOIN tblMusteri AS M1 WITH(NOLOCK) INNER JOIN tblSiparisDetay AS SD WITH(NOLOCK) INNER JOIN tblDispo AS MD WITH(NOLOCK) ON SD.siparisDetayId = MD.siparisDetayId INNER JOIN tblTipTur AS TT WITH(NOLOCK) ON SD.tipTurId = TT.tipTurId INNER JOIN tblSiparis AS SP WITH(NOLOCK) ON SD.siparisId = SP.siparisId ON M1.musteriNo = SP.musteriNo INNER JOIN tblTip AS TP WITH(NOLOCK) ON SD.tipTurId = TP.tipTurId AND SD.tipNo = TP.tipNo AND SD.desenNo = TP.desen AND SD.varyantNo = TP.varyant INNER JOIN tblOrgu AS O WITH(NOLOCK) ON TP.orguId = O.orguId INNER JOIN tblMusteri AS M WITH(NOLOCK) INNER JOIN tblSevkiyat AS SV WITH(NOLOCK) ON M.musteriNo = SV.musteriNo INNER JOIN tblSevkDetay AS SVD WITH(NOLOCK) ON SV.sevkNo = SVD.sevkNo ON MD.mamulDispoHamSevkno = SV.sevkNo LEFT OUTER JOIN tblTop AS T WITH(NOLOCK) INNER JOIN tblDispo AS HD WITH(NOLOCK) ON T.dispoNo = HD.dispoNo AND T.dispoTuruId = HD.dispoTuruId ON SVD.dispoTuruId = T.dispoTuruId AND SVD.dispoNo = T.dispoNo AND SVD.topNo = T.topNo AND MD.siparisDetayId = HD.siparisDetayId ON DT.dispoTuruId = MD.dispoTuruId AND DT.dispoNo = MD.dispoNo LEFT OUTER JOIN tblDispoTerbiyeTest AS TS WITH(NOLOCK) ON DT.dispoTuruId = TS.dispoTuruId AND DT.dispoNo = TS.dispoNo --WHERE DT.gelisTarihi IS NULL -- OR DT.gelisTarihi > GETDATE()-30 GROUP BY MD.dispoNo, DT.partiNo, DT.iplikBoyaId, TS.mamulEn, TS.mamulGramaj, DT.gelisMetresi, DT.gelisTarihi, DT.atkiCekmesi, DT.cozguCekmesi, DT.fire, DT.fiyat, DT.aciklama, DT.sqlUserName, DT.kayitTarihi, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SD.siparisId, SD.siparisDetayId, B.tanimAd, M.musteriAdi, M.musteriAdi, M1.musteriAdi, O.orguAd, TP.iplikAciklama, SD.miktar, MD.dispoNotu, SP.mamulTermin, DT.dovizId, DV.dovizCins, MD.dispoMetresi, MD.akisNotu, SV.sevkNo, SV.sevkTarihi, DT.prosesOk, SP.siparisId )
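    A hedged place to start before buying hardware (generic first steps, not a diagnosis of this particular query): after a database lands on different hardware, stale statistics and fragmented indexes commonly produce exactly this kind of sudden slowdown, so refresh them and let the plans recompile:

      -- Sketch; table names follow the query above. Run during a quiet window.
      EXEC sp_updatestats;                          -- refresh optimizer statistics
      ALTER INDEX ALL ON tblDispoTerbiye REBUILD;   -- repeat for the other hot tables
      ALTER INDEX ALL ON tblDispo REBUILD;
      DBCC FREEPROCCACHE;                           -- drop cached plans so they recompile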

    Read the article

  • Suggestions on how to map from Domain (ORM) objects to Data Transfer Objects (DTO)

    - by FryHard
    The current system that I am working on makes use of Castle ActiveRecord to provide ORM (Object-Relational Mapping) between the domain objects and the database. This is all well and good, and most of the time it actually works well! The problem comes about with Castle ActiveRecord's support for asynchronous execution, more specifically the SessionScope that manages the session that objects belong to. Long story short: bad stuff happens! We are therefore looking for a way to easily convert (think automagically) from the domain objects (which know that a DB exists and care) to the DTO objects (which know nothing about the DB and care not for sessions, mapping attributes, or all things ORM). Does anyone have suggestions on doing this? For a start I am looking for a basic one-to-one mapping of objects: the domain object Person would be mapped to, say, PersonDTO. I do not want to do this manually, since that is a waste. Obviously reflection comes to mind, but I am hoping that, with some of the better IT knowledge floating around this site, something "cooler" will be suggested. Oh, I am working in C#; the ORM objects are, as said before, mapped with Castle ActiveRecord.

    Example code: By @ajmastrean's request I have linked to an example that I have (badly) mocked together. The example has a capture form, a capture form controller, domain objects, an ActiveRecord repository, and an async helper. It is slightly big (3MB) because I included the ActiveRecord DLLs needed to get it running. You will need to create a database called ActiveRecordAsync on your local machine or just change the .config file. Basic details of the example:

    The capture form. The capture form has a reference to the controller:

    private CompanyCaptureController MyController { get; set; }

    On initialise of the form it calls MyController.Load():

    private void InitForm ()
    {
        MyController = new CompanyCaptureController(this);
        MyController.Load();
    }

    This will return back to a method called LoadCompleted():

    public void LoadCompleted (Company loadCompany)
    {
        _context.Post(delegate
        {
            CurrentItem = loadCompany;
            bindingSource.DataSource = CurrentItem;
            bindingSource.ResetCurrentItem();
            //TODO: This line will throw the exception, since the session scope used to fetch loadCompany is now gone.
            grdEmployees.DataSource = loadCompany.Employees;
        }, null);
    }

    This is where the "bad stuff" occurs, since we are using the child list of Company, which is set to lazy-load.

    The controller. The controller has a Load method that is called from the form; it then calls the async helper to asynchronously call the LoadCompany method and then return to the capture form's LoadCompleted method:

    public void Load ()
    {
        new AsyncListLoad<Company>().BeginLoad(LoadCompany, Form.LoadCompleted);
    }

    The LoadCompany() method simply makes use of the repository to find a known company:

    public Company LoadCompany()
    {
        return ActiveRecordRepository<Company>.Find(Setup.company.Identifier);
    }

    The rest of the example is rather generic: it has two domain classes which inherit from a base class, a setup file to insert some data, and the repository to provide the ActiveRecordMediator abilities.
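    Until a better answer lands, a minimal reflection-based sketch of the one-to-one case (it assumes matching property names and types between Person and PersonDTO; a library such as AutoMapper covers the edge cases this ignores):

      public static class DtoMapper
      {
          // Copies same-named, assignment-compatible properties from the domain
          // object onto a fresh DTO. Shallow only: child collections/objects
          // (e.g. lazy-loaded lists) need their own mapping.
          public static TDto Map<TDomain, TDto>(TDomain source) where TDto : new()
          {
              var dto = new TDto();
              foreach (var sp in typeof(TDomain).GetProperties())
              {
                  var tp = typeof(TDto).GetProperty(sp.Name);
                  if (tp != null && tp.CanWrite && sp.CanRead
                      && tp.PropertyType.IsAssignableFrom(sp.PropertyType))
                  {
                      tp.SetValue(dto, sp.GetValue(source, null), null);
                  }
              }
              return dto;
          }
      }

      // usage: PersonDTO dto = DtoMapper.Map<Person, PersonDTO>(person);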

    Read the article

  • Anything wrong with this function for comparing floats?

    - by Michael Borgwardt
    When my Floating-Point Guide was yesterday published on slashdot, I got a lot of flak for my suggested comparison function, which was indeed inadequate. So I finally did the sensible thing and wrote a test suite to see whether I could get them all to pass. Here is my result so far. And I wonder if this is really as good as one can get with a generic (i.e. not application specific) float comparison function, or whether I still missed some edge cases. import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; import org.junit.Test; public class NearlyEqualsTest { public static boolean nearlyEqual(float a, float b) { final float epsilon = 0.000001f; final float absA = Math.abs(a); final float absB = Math.abs(b); final float diff = Math.abs(a-b); if (a*b==0) { // a or b or both are zero // relative error is not meaningful here return diff < Float.MIN_VALUE / epsilon; } else { // use relative error return diff / (absA+absB) < epsilon; } } /** Regular large numbers - generally not problematic */ @Test public void big() { assertTrue(nearlyEqual(1000000f, 1000001f)); assertTrue(nearlyEqual(1000001f, 1000000f)); assertFalse(nearlyEqual(10000f, 10001f)); assertFalse(nearlyEqual(10001f, 10000f)); } /** Negative large numbers */ @Test public void bigNeg() { assertTrue(nearlyEqual(-1000000f, -1000001f)); assertTrue(nearlyEqual(-1000001f, -1000000f)); assertFalse(nearlyEqual(-10000f, -10001f)); assertFalse(nearlyEqual(-10001f, -10000f)); } /** Numbers around 1 */ @Test public void mid() { assertTrue(nearlyEqual(1.0000001f, 1.0000002f)); assertTrue(nearlyEqual(1.0000002f, 1.0000001f)); assertFalse(nearlyEqual(1.0002f, 1.0001f)); assertFalse(nearlyEqual(1.0001f, 1.0002f)); } /** Numbers around -1 */ @Test public void midNeg() { assertTrue(nearlyEqual(-1.000001f, -1.000002f)); assertTrue(nearlyEqual(-1.000002f, -1.000001f)); assertFalse(nearlyEqual(-1.0001f, -1.0002f)); assertFalse(nearlyEqual(-1.0002f, -1.0001f)); } /** Numbers between 1 and 0 */ @Test public void small() { assertTrue(nearlyEqual(0.000000001000001f, 0.000000001000002f)); assertTrue(nearlyEqual(0.000000001000002f, 0.000000001000001f)); assertFalse(nearlyEqual(0.000000000001002f, 0.000000000001001f)); assertFalse(nearlyEqual(0.000000000001001f, 0.000000000001002f)); } /** Numbers between -1 and 0 */ @Test public void smallNeg() { assertTrue(nearlyEqual(-0.000000001000001f, -0.000000001000002f)); assertTrue(nearlyEqual(-0.000000001000002f, -0.000000001000001f)); assertFalse(nearlyEqual(-0.000000000001002f, -0.000000000001001f)); assertFalse(nearlyEqual(-0.000000000001001f, -0.000000000001002f)); } /** Comparisons involving zero */ @Test public void zero() { assertTrue(nearlyEqual(0.0f, 0.0f)); assertFalse(nearlyEqual(0.00000001f, 0.0f)); assertFalse(nearlyEqual(0.0f, 0.00000001f)); } /** Comparisons of numbers on opposite sides of 0 */ @Test public void opposite() { assertFalse(nearlyEqual(1.000000001f, -1.0f)); assertFalse(nearlyEqual(-1.0f, 1.000000001f)); assertFalse(nearlyEqual(-1.000000001f, 1.0f)); assertFalse(nearlyEqual(1.0f, -1.000000001f)); assertTrue(nearlyEqual(10000f*Float.MIN_VALUE, -10000f*Float.MIN_VALUE)); } /** * The really tricky part - comparisons of numbers * very close to zero. 
*/ @Test public void ulp() { assertTrue(nearlyEqual(Float.MIN_VALUE, -Float.MIN_VALUE)); assertTrue(nearlyEqual(-Float.MIN_VALUE, Float.MIN_VALUE)); assertTrue(nearlyEqual(Float.MIN_VALUE, 0)); assertTrue(nearlyEqual(0, Float.MIN_VALUE)); assertTrue(nearlyEqual(-Float.MIN_VALUE, 0)); assertTrue(nearlyEqual(0, -Float.MIN_VALUE)); assertFalse(nearlyEqual(0.000000001f, -Float.MIN_VALUE)); assertFalse(nearlyEqual(0.000000001f, Float.MIN_VALUE)); assertFalse(nearlyEqual(Float.MIN_VALUE, 0.000000001f)); assertFalse(nearlyEqual(-Float.MIN_VALUE, 0.000000001f)); assertFalse(nearlyEqual(1e20f*Float.MIN_VALUE, 0.0f)); assertFalse(nearlyEqual(0.0f, 1e20f*Float.MIN_VALUE)); assertFalse(nearlyEqual(1e20f*Float.MIN_VALUE, -1e20f*Float.MIN_VALUE)); } }
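    One hedged refinement to consider (my suggestion, not something from the post or its test suite): short-circuiting on a == b also covers the infinities, where diff becomes NaN and both branches above return false even for INF compared with itself; and Float.MIN_NORMAL gives the near-zero cutoff a clearer name than MIN_VALUE / epsilon. A sketch along those lines, with epsilon as a parameter:

      public final class FloatCompare {
          public static boolean nearlyEqual(float a, float b, float epsilon) {
              final float absA = Math.abs(a);
              final float absB = Math.abs(b);
              final float diff = Math.abs(a - b);

              if (a == b) {
                  return true; // shortcut; handles infinities and exact matches
              } else if (a == 0 || b == 0 || diff < Float.MIN_NORMAL) {
                  // a or b is (near) zero: relative error is meaningless here,
                  // so compare against a scaled absolute error instead
                  return diff < epsilon * Float.MIN_NORMAL;
              } else {
                  // relative error, guarding the sum against overflow to infinity
                  return diff / Math.min(absA + absB, Float.MAX_VALUE) < epsilon;
              }
          }
      }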

    Read the article

  • Data munging and data import scripting

    - by morpheous
    I need to write some scripts to carry out some tasks on my server (running Ubuntu Server 8.04 LTS). The tasks are to be run periodically, so I will be running the scripts as cron jobs. I have divided the tasks into "group A" and "group B", because (in my mind at least) they are a bit different.

    Task group A: import data from a file and possibly reformat it (by reformatting, I mean doing things like sanitizing the data, possibly normalizing it, and/or running calculations on 'columns' of the data), then import the munged data into a database. For now, I am using MySQL for the vast majority of imports, although some files will be imported into a SQLite database. Note: the files will be mostly text files, although some of the files are in a binary format (my own proprietary format, written by a C++ application I developed).

    Task group B: extract data from the database, perform calculations on the data, and either insert into or update tables in the database.

    My coding experience is primarily as a C/C++ developer, although I have been using PHP as well for the last 2 years or so. I am from a Windows background, so I am still finding my feet in the Linux environment. My question is this: I need to write scripts to perform the tasks I described above. Although I suppose I could write a few C++ applications to be used in the shell scripts, I think it may be better to write them in a scripting language (maybe this is a flawed assumption?). My thinking is that it would be easier to modify things in a script: no need to rebuild etc. for changes to functionality. Additionally, data munging in C++ tends to involve more lines of code than in "natural" scripting languages such as Perl, Python etc.

    Assuming that the majority of people on here agree that scripting is the way to go, herein lies my dilemma: which scripting language to use to perform the tasks above (given my background)? My gut instinct tells me that Perl (shudder) would be the most obvious choice for performing all of the above tasks. BUT (and that is a big BUT) the mere mention of Perl makes my toes curl, as I had a very, very bad experience with it a while back. The syntax seems quite unnatural to me, despite how many times I have tried to learn it, so if possible I would really like to give it a miss. PHP (which I already know) I am also not sure is a good candidate for scripting on the CLI (I have not seen many examples of how to do this, so I may be wrong).

    The last thing I must mention is that IF I have to learn a new language in order to do this, I cannot afford (time constraint) to spend more than a day learning the key commands/features required in order to do this (I can always learn the details of the language later, once I have actually deployed the scripts).

    So, which scripting language would you recommend (PHP, Python, Perl, [insert your favorite here]), and most importantly, WHY? Or should I just stick to writing little C++ applications that I call in a shell script? Lastly, if you have suggested a scripting language, can you please show with a FEW lines (Perl mongers, I'm looking in your direction [nothing too cryptic!] ;) ) how I can use the language you suggested to do what I want to do? Hopefully, the lines you present will convince me that it can be done easily and elegantly in the language you suggested.
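    Since the question asks for a few lines in whatever language gets suggested, here is a hedged Python illustration of task group A (the file, table, and credential names are all made up; the python-mysqldb package is assumed to be installed):

      #!/usr/bin/env python
      # Sketch: read a CSV, sanitize one column, derive another, load into MySQL.
      import csv
      import MySQLdb  # Debian/Ubuntu package: python-mysqldb

      conn = MySQLdb.connect(user="loader", passwd="secret", db="warehouse")
      cur = conn.cursor()
      fh = open("readings.csv", "rb")
      for row in csv.reader(fh):
          name = row[0].strip().lower()   # sanitize
          value = float(row[1]) * 1.5     # example per-column calculation
          cur.execute("INSERT INTO readings (name, value) VALUES (%s, %s)",
                      (name, value))
      fh.close()
      conn.commit()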

    Read the article

  • Custom Online Backup Solution Advice

    - by Martín Marconcini
    I have to implement a way for our customers to back up their SQL 2000/2005/2008 databases online. The application they use is a C#/.NET 3.5 WinForms application that connects to a SQL Server (can be 2000/2005/2008, sometimes Express editions). The SQL Server is on the same LAN. Our application has a very specific UI and we must code each form following those guidelines. There's lots of GDI+ to give it the look and feel we want. For that reason, using a 3rd-party application is not a very good idea. We need to charge the customer on a monthly/annual basis for the service. Preferably, the customer doesn't need to care about bandwidth and storage space. It must be transparent. Given the above requirements, my first thoughts are:

    Solution 1: Code some sort of basic FTP functionality with a behind-the-scenes SQL backup mechanism, then hire a hosting service and compress-and-transfer the .BAK to the hosting. Maintain a series of folders (one for each customer). They won't see what's happening. They will just see a list of their files and a big "Backup now" button that will perform the SQL backup, compress it and upload it (and update the file list) ;)
    Pros: Not very complicated to implement, simple to use, fairly simple to configure (could have a dedicated FTP user/pass).
    Cons: Finding an "FTP-only" hosting plan is probably not going to be easy; they usually come with a bunch of stuff. FTP is not always the best protocol. More?

    Solution 2: Similar to 1, but instead of FTP, find a cloud computing service like Amazon S3, Mosso or similar.
    Pros: Cloud storage is fast, reliable, etc. It's kind of easy to implement (especially if there are APIs like AWS or Mosso).
    Cons: I have been unable to come up with a service optimized for resellers where I can have multiple sub-accounts (one for each customer). Billing is going to be a nightmare because these services bill per GB, and with one account it's impossible to differentiate each customer.

    Solution 3: Similar to 2, but letting the user create their own account on Amazon S3 (for example).
    Pros: You forget about billing and such.
    Cons: A mess for the customer, who has to open the Amazon (or whatever) account and will be charged by them rather than by you. You can't really charge the customer (since you're just not doing anything).

    Solution 4: Use one of the many online backup solutions that use this kind of cloud storage tech.
    Pros: Many of these include SQL Server backup, and a lot of features that we'd otherwise have to implement. Plus web access and things like that will come included.
    Cons: Still the billing problem described in number 2. Few of these companies (if any) offer "reseller" accounts. You eventually have to use their software (some offer certain branding).

    Any better approach? Summary: You have a piece of software (a .NET WinForms app). You want your users to be able to back up their SQL Server databases online (and be able to retrieve the backups if needed). You would ideally like to charge the customer for this service (e.g. XX € a year).
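    For the "Backup now" step that Solutions 1 and 2 share, a hedged C# sketch (the connection string, database name, and path are placeholders): plain T-SQL BACKUP over ADO.NET works against SQL Server 2000/2005/2008, including Express editions, with no extra libraries.

      using System.Data.SqlClient;

      static void BackupDatabase()
      {
          using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Integrated Security=true"))
          using (var cmd = new SqlCommand(
                     "BACKUP DATABASE [CustomerDb] TO DISK = @path WITH INIT", conn))
          {
              cmd.Parameters.AddWithValue("@path", @"C:\Backups\CustomerDb.bak");
              cmd.CommandTimeout = 0;   // backups can easily exceed the 30s default
              conn.Open();
              cmd.ExecuteNonQuery();    // then compress and upload the .bak
          }
      }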

    Read the article

  • Nhibernate Migration from 1.0.2.0 to 2.1.2 and many-to-one save problems

    - by Meska
    Hi, we have an old, big ASP.NET application with NHibernate, which we are extending and upgrading in parts. The NHibernate that was used was pretty old (1.0.2.0), so we decided to upgrade to 2.1.2 for the new features. The HBM files are generated through a custom template with MyGeneration. Everything went quite smoothly, except for one thing. Let's say we have two objects, Blog and Post. A Blog can have many Posts, so Post has a many-to-one relationship to Blog. Due to the way this application operates, the relationship is done not through primary keys but through the Blog.Reference column. Sample mappings and .cs files:

    <?xml version="1.0" encoding="utf-8" ?>
    <class name="SampleNamespace.BlogEntity,SampleNamespace">
      <id name="Id" column="Id" type="Guid">
        <generator class="assigned"/>
      </id>
      <property column="Reference" type="Int32" name="Reference" not-null="true" />
      <property column="Name" type="String" name="Name" length="250" />
    </class>

    <?xml version="1.0" encoding="utf-8" ?>
    <class name="SampleNamespace.PostEntity,SampleNamespace">
      <id name="Id" column="Id" type="Guid">
        <generator class="assigned"/>
      </id>
      <property column="Reference" type="Int32" name="Reference" not-null="true" />
      <property column="Name" type="String" name="Name" length="250" />
      <many-to-one name="Blog" column="BlogId" class="SampleNamespace.BlogEntity,SampleNamespace" property-ref="Reference" />
    </class>

    And the class files:

    class BlogEntity
    {
        public Guid Id { get; set; }
        public int Reference { get; set; }
        public string Name { get; set; }
    }

    class PostEntity
    {
        public Guid Id { get; set; }
        public int Reference { get; set; }
        public string Name { get; set; }
        public BlogEntity Blog { get; set; }
    }

    Now let's say that I have a Blog with Id 1D270C7B-090D-47E2-8CC5-A3D145838D9C and with Reference 1. In the old NHibernate, such a thing was possible:

    //this Blog already exists in the database
    BlogEntity blog = new BlogEntity();
    blog.Id = Guid.Empty;
    blog.Reference = 1; //Reference is unique, so we can distinguish the Blog by this field
    blog.Name = "My blog";

    //this is the new Post that we are trying to insert
    PostEntity post = new PostEntity();
    post.Id = Guid.NewGuid();
    post.Name = "New post";
    post.Reference = 1234;
    post.Blog = blog;

    session.Save(post);

    However, in the new version I get an exception that it cannot insert NULL into Post.BlogId. As I understand it, for the old NHibernate it was enough to have the Blog.Reference field: it could retrieve the entity by that field, attach it to the PostEntity, and when saving the PostEntity everything would work correctly. And as I understand it, the new NHibernate tries to resolve only by Blog.Id. How do I solve this? I cannot change the DB design, nor can I assign an Id to the BlogEntity, as the objects are out of my control (they come prefilled as generic "objects" from an external source).
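    One hedged workaround sketch (criteria API names as of NHibernate 2.x; not a confirmed fix): resolve the Blog through the session by its Reference value first, so the many-to-one gets a persistent instance instead of a transient stub with an empty Id.

      // using NHibernate.Criterion;
      BlogEntity blog = (BlogEntity) session.CreateCriteria(typeof(BlogEntity))
          .Add(Restrictions.Eq("Reference", 1))
          .UniqueResult();

      PostEntity post = new PostEntity();
      post.Id = Guid.NewGuid();
      post.Reference = 1234;
      post.Name = "New post";
      post.Blog = blog;          // persistent instance; BlogId can now resolve
      session.Save(post);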

    Read the article

  • SurfaceView drawn on top of other elements after coming back from specific activity

    - by spirytus
    I have an activity with a video preview displayed via a SurfaceView and other views positioned over it. The problem is that when the user navigates to the Settings activity (code below) and comes back, the SurfaceView is drawn on top of everything else. This does not happen when the user goes to another activity I have, nor when the user navigates outside of the app, e.g. to the task manager. Now, you can see in the code below that I have the setContentView() call wrapped in conditionals so it is not called every time onStart() is executed. If it's not wrapped in if statements then all works fine, but then lots of memory (5MB+) is lost each time onStart() is called. I tried various combinations and nothing seems to work, so any help would be much appreciated.

    @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); //Toast.makeText(this,"Create ", 2000).show(); // set 32 bit window (draw correctly transparent images) getWindow().getAttributes().format = android.graphics.PixelFormat.RGBA_8888; // set the layout of the screen based on preferences of the user sharedPref = PreferenceManager.getDefaultSharedPreferences(this); } public void onStart() { super.onStart(); String syncConnPref = null; syncConnPref = sharedPref.getString("screensLayouts", "default"); if(syncConnPref.contentEquals("default") && currentlLayout!="default") { setContentView(R.layout.fight_recorder_default); } else if(syncConnPref.contentEquals("simple") && currentlLayout!="simple") { setContentView(R.layout.fight_recorder_simple); } // If I uncomment the line below so it is called every time, without the conditionals above, it works fine, but every time onStart() is called I'm losing 5+ MB of memory (memory leak?). The preview then shows under the other elements exactly as I need; the memory leak makes it unusable after a few times though // setContentView(R.layout.fight_recorder_default); if(getCamera()==null) { Toast.makeText(this,"Sorry, camera is not available and fight recording will not be permanently stored",2000).show(); // TODO also in here put some code replacing the background with something nice return; } // now we have camera ready and we need surface to display picture from camera on so // we instantiate the CameraPreview object which is simply a surfaceView containing a holder object. // the holder object is the surface where the image will be drawn onto // this is where the live camera preview will be displayed cameraPreviewLayout = (FrameLayout) findViewById(id.camera_preview); cameraPreview = new CameraPreview(this); // now we add surface view to layout cameraPreviewLayout.removeAllViews(); cameraPreviewLayout.addView(cameraPreview); // get layouts prepared for different elements (views) // this is whole recording screen, as big as screen available recordingScreenLayout=(FrameLayout) findViewById(R.id.recording_screen); // this is used to display scores as they are added, it displays like a path // each score added is a new text view simply and as user undos these are removed one by one allScoresLayout=(LinearLayout) findViewById(R.id.all_scores); // layout prepared for controls like record/stop buttons etc startStopLayout=(RelativeLayout) findViewById(R.id.start_stop_layout); // set up timer so it can be turned on when needed //fightTimer=new FightTimer(this); fightTimer = (FightTimer) findViewById(id.fight_timer); // get views for displaying scores score1=(TextView) findViewById(id.score1); score2=(TextView) findViewById(id.score2); advantages1=(TextView) findViewById(id.advantages1); advantages2=(TextView) findViewById(id.advantages2); penalties1=(TextView) findViewById(id.penalties1); penalties2=(TextView) findViewById(id.penalties2); RelativeLayout welcomeScreen=(RelativeLayout) findViewById(id.welcome_screen); Animation fadeIn = AnimationUtils.loadAnimation(this, R.anim.fade_in); welcomeScreen.startAnimation(fadeIn); Toast.makeText(this,"Start ", 2000).show(); animateViews(); }

    The Settings activity is below; after coming back from this activity, the SurfaceView is drawn on top of the other elements.

    public class SettingsActivity extends PreferenceActivity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); if(MyFirstAppActivity.getCamera()==null) { Toast.makeText(this,"Sorry, camera is not available",2000).show(); return; } addPreferencesFromResource(R.xml.preferences); } }
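    A hedged guess to try (SurfaceView z-order is a known quirk, but this is not a confirmed fix for this exact code): the z-order of a SurfaceView is only applied when its surface is created, so request it on the freshly constructed preview before adding it to the layout. setZOrderMediaOverlay(true) keeps the surface above other surfaces but still behind the window, so sibling views keep compositing on top of the preview:

      // inside onStart(), where the preview is (re)built:
      cameraPreview = new CameraPreview(this);
      cameraPreview.setZOrderMediaOverlay(true); // must be set before the surface exists
      cameraPreviewLayout.removeAllViews();
      cameraPreviewLayout.addView(cameraPreview);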

    Read the article

  • Magento Onepage Success Conversion Tracking Design Pattern

    - by user1734954
    My intent is to track conversions through multiple channels by inserting third-party JavaScript (for example Google Analytics, Optimizely, PriceGrabber, etc.) into the footer of the onepage checkout success page. I've accomplished this by adding a block to the footer reference inside the checkout success node within local.xml, and everything works appropriately. My questions are more about efficiency and extensibility. It occurred to me that it would be better to combine all of the blocks into a single block reference, and then use various methods, acting on a single call to the related models, to provide the data needed for insertion into the JavaScript of each conversion tracking script. Some examples of the common data that conversion tracking may rely on (pseudo): Order ID, Order Total, Order.LineItem.Name (foreach), and so on. Currently, for each of the scripts I make a call to the appropriate model, passing the customer's last order id as the load value, then call a get(), assign the return value to a variable, and iterate through the data to match the values with the expectations of the given third-party service. All of the data should be pulled once when checkout is complete; each third-party service may expect different data in different formats. Here is an example of one of the conversion tracking template files, which loads in the footer of checkout success:

        <?php
        $order = Mage::getModel('sales/order')->loadByIncrementId(
            Mage::getSingleton('checkout/session')->getLastRealOrderId());
        $amount = number_format($order->getGrandTotal(), 2);
        $customer = Mage::helper('customer')->getCustomer()->getData();
        ?>
        <script type="text/javascript">
            popup_email = '<?php echo($customer['email']); ?>';
            popup_order_number = '<?php echo $this->getOrderId() ?>';
        </script>
        <!-- PriceGrabber Merchant Evaluation Code -->
        <script type="text/javascript" charset="UTF-8"
            src="https://www.pricegrabber.com/rating_merchrevpopjs.php?retid=<something>"></script>
        <noscript><a href="http://www.pricegrabber.com/rating_merchrev.php?retid=<something>" target=_blank>
            <img src="https://images.pricegrabber.com/images/mr_noprize.jpg" border="0" width="272" height="238" alt="Merchant Evaluation"></a></noscript>
        <!-- End PriceGrabber Code -->

    Having just a single piece of code like this is not that big of a deal, but we are doing similar things with a number of different third-party services, and PriceGrabber is one of the simpler examples. A more sophisticated tracking service expects a comma-separated list of all of the product names, ids, prices, categories, the order id, etc. To make it all more manageable, my idea is to: combine all of the template files into a single file, and develop a helper class or library to deliver the data to the conversion template. Goals include extensibility, minimal model calls, and minimal method calls. The questions: 1. Is a Mage helper the best route to take? 2. Is there any design pattern you would recommend for the "helper" class? 3. Why would the design pattern you've chosen be best in this instance?
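
    One shape that fits these goals is to pull the order data once into a plain snapshot object and give each tracking service its own small formatter, as in the sketch below. It is written in Python purely to show the pattern; in Magento this would live in a helper or block class, and every class and field name here is invented for illustration:

        class OrderSnapshot:
            """All conversion data, pulled once at checkout success."""
            def __init__(self, order_id, total, email, line_items):
                self.order_id = order_id
                self.total = total
                self.email = email
                self.line_items = line_items  # list of (name, price) pairs

        class PriceGrabberFormatter:
            def render(self, o):
                # PriceGrabber only needs the order number and email.
                return {"order_number": o.order_id, "email": o.email}

        class AnalyticsFormatter:
            def render(self, o):
                # A richer service wants totals and a joined item list.
                return {"id": o.order_id, "revenue": o.total,
                        "items": ",".join(name for name, _ in o.line_items)}

        def tracking_payloads(snapshot, formatters):
            # One model load, N cheap format calls.
            return [f.render(snapshot) for f in formatters]

    Each new service then costs one formatter class and zero extra model loads.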

    Read the article

  • can a python script know that another instance of the same script is running... and then talk to it?

    - by Justin Grant
    I'd like to prevent multiple instances of the same long-running Python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way? Specifically, I'd like to enable the following behavior: "foo.py" is launched from the command line and stays running for a long time (days or weeks, until the machine is rebooted or the parent process kills it). Every few minutes the same script is launched again, but with different command-line parameters. When launched, the script should check whether any other instances are running. If so, instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit. Instance #1, on receiving command-line parameters from another script, should spin up a new thread and (using those parameters) start performing the work that instance #2 was going to perform. So I'm looking for two things: how can a Python program know another instance of itself is running, and how can one Python command-line program communicate with another? Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and no OS-specific calls. If I need a Windows codepath and a *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible. I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work), but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally use an in-memory solution, but again I'm flexible; if a persistent-file-based approach is the only way to do it, I'm open to that option. More details: I'm trying to do this because our servers use a monitoring tool which supports running Python scripts to collect monitoring data (e.g. results of a database query or web service call), which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query), so we've chosen to keep them running in an infinite loop until the parent process kills them. This works great, but on larger servers 100 instances of the same script may be running, even if each gathers data only every 20 minutes. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with one thread each to one process with 100 threads, each executing the work that one script was doing before. But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep the invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" script.
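
    One cross-platform, stdlib-only sketch of this: the first instance binds a TCP socket on localhost and becomes the long-lived worker; every later instance fails to bind, connects instead, hands over its argv as a JSON line, and exits. The port number and the one-line framing are arbitrary choices for illustration, and real code would also need to handle the race where the first instance dies mid-handoff:

        import json
        import socket
        import sys
        import threading

        PORT = 47923  # arbitrary fixed port; pick one unlikely to collide

        def handle_peer(conn):
            # Each later instance sends one JSON line of its argv, then disconnects.
            with conn:
                args = json.loads(conn.makefile().readline())
                print("received work:", args)  # real code would start a worker thread

        def main():
            listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                listener.bind(("127.0.0.1", PORT))  # succeeds only for instance #1
            except OSError:
                # Another instance holds the port: hand over our args and exit.
                with socket.create_connection(("127.0.0.1", PORT)) as conn:
                    conn.sendall((json.dumps(sys.argv[1:]) + "\n").encode())
                return
            listener.listen()
            while True:  # instance #1 loops forever, accepting handoffs
                conn, _ = listener.accept()
                threading.Thread(target=handle_peer, args=(conn,), daemon=True).start()

        if __name__ == "__main__":
            main()

    Unlike lock files, the OS releases the port automatically on any shutdown, graceful or not, which sidesteps the stale-file cleanup concern.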

    Read the article

  • Cutting large XML file into smaller pieces in C#

    - by NDraskovic
    I have a problem that I've been working on for quite some time now. I have an XML file with over 50,000 records (each record has 3 levels). This file is used by one of my applications to control document sending (a record holds, among other information, the type of document that has to be sent to a certain person). In my application I load the XML file into an XmlDocument, and then, using the SelectNodes method, I create an XmlNodeList from which I read the data I want. The process works like this: our worker takes the person's ID card (simple, with a barcode) and reads it with a barcode reader. When the barcode value has been read, my application finds the person with that ID in the XML file and stores the type of the document in a string variable. Then the worker takes the document and reads its barcode, and if the document's barcode matches the value in the string variable, the application records that a document of type xxxxxxxx will be sent to the person with ID yyyyyyyyy. This is very simple code and it works perfectly for now. On the textBox1_TextChanged event (worker reads the person's ID):

        foreach (XmlNode node in NodeList)
        {
            if (String.Compare(node.Attributes.GetNamedItem("ID").Value.ToString(), textBox1.Text) == 0)
            {
                ControlString = node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString();
                break;
            }
        }
        textBox2.Focus();

    And on the textBox2_TextChanged event (worker reads the document's barcode):

        if (String.Compare(textBox2.Text, ControlString) == 0)
        {
            // Create a record and insert it into a SQL database
        }

    My question is: how will my application perform with larger XML files (I was told the XML file might grow to 500,000 records)? Will this approach still be valid, or will I need to cut the file into smaller files? If I have to cut it, please give me an idea with some code samples. I've tried to do it like this, reading an entire record and storing it in a string:

        private void WriteXml(XmlNode record)
        {
            tempXML = record.InnerXml;
            temp = "<" + record.Name + " code=\"" + record.Attributes.GetNamedItem("code").Value + "\">" + Environment.NewLine;
            temp += tempXML + Environment.NewLine;
            temp += "</" + record.Name + ">";
            SmallerXMLDocument += temp + Environment.NewLine;
            temp = "";
            i++;
        }

    (tempXML, temp and SmallerXMLDocument are all string variables.) Then, in a button_Click method, I load the XML file into an XmlNodeList (again using XmlDocument.SelectNodes) and try to build one big string value that holds all the records:

        foreach (XmlNode node in nodes)
        {
            if (String.Compare(node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(), doctype1) == 0)
            {
                WriteXml(node);
            }
        }

    My idea was to build up a string value (in this case called SmallerXMLDocument) and, once I had passed through the entire XML file, simply copy that string into a new file. This works, but only for files with up to 2,000 records (and mine has far more than that). So, if I need to cut the file into smaller pieces, what would be the best way to do it (keeping in mind that there could be up to half a million records in the XML file)? Thanks
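
    The question is C# (where XmlReader gives the same forward-only streaming), but the shape of the fix is language-independent: never hold the whole document in memory, neither as an XmlDocument nor as one growing string, and instead stream records through and flush them out in batches. Here is that streaming-split idea sketched in Python with xml.etree.ElementTree.iterparse; the record tag name, output file naming, and chunk size are placeholder assumptions:

        import xml.etree.ElementTree as ET

        def split_records(path, records_per_file=2000):
            """Stream a huge XML file and rewrite it as smaller chunk files."""
            chunk, file_no = [], 0
            for _, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == "record":  # hypothetical record element name
                    chunk.append(ET.tostring(elem, encoding="unicode"))
                    elem.clear()  # free the parsed subtree; this keeps memory flat
                    if len(chunk) == records_per_file:
                        write_chunk(chunk, file_no)
                        chunk = []
                        file_no += 1
            if chunk:
                write_chunk(chunk, file_no)

        def write_chunk(records, n):
            with open("part_%d.xml" % n, "w", encoding="utf-8") as f:
                f.write("<records>\n" + "\n".join(records) + "\n</records>")

    With this pattern the file size stops mattering; half a million records just take proportionally longer to pass through.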

    Read the article

  • What are the principles of developing web-applications with action-based java frameworks?

    - by Roman
    Background: I'm going to develop a new web application in Java. It's not very big or very complex, and I have enough time before it "officially" starts. I have some JSF/Facelets development background (about half a year), and I also have some experience with JSP+JSTL. For self-education (and in order to find the best solution) I want to prototype the new project with one of the action-based frameworks; I will choose between Spring MVC and Stripes. Problem: in order to form a correct impression of action-based frameworks (in comparison with JSF), I want to be sure that I use them correctly, to a greater or lesser extent. So here I list the most frequent tasks (at least for me) and describe how I solve each with JSF. I want to know how they should be solved with an action-based framework (or separately with Spring MVC and Stripes, if there is any difference for a given task).
    Rendering content: I can apply a ready-to-use component from the standard JSF libraries (core and html) or from 3rd-party libs (like RichFaces). I can combine simple components, and I can easily create my own components based on the standard ones.
    Rendering data (primitive or reference types) in the correct format: each component allows a converter to be specified for transforming data in both directions (to render and to send to the server). A converter is usually a simple class with 2 small methods.
    Site navigation: I specify a set of navigation cases in faces-config.xml, then specify the action attribute of a link (or a button) which should match one or more navigation cases; the best match is chosen by JSF.
    Implementing flow (multi-form wizards, for example): I'm using JSF 1.2, so I use Apache Orchestra for the flow (conversation) scope.
    Form processing: I have a fairly standard Java bean (a backing bean in JSF terms) with some scope, and I map form fields onto its properties. If everything goes well (no exceptions and validation passes), all these properties are set with values from the form fields. I can then call one method (specified in the button's action attribute) to execute some logic and return a string which should match one of my navigation cases to go to the next screen.
    Form validation: I can create a custom validator (or choose from existing ones) and add it to almost any component. 3rd-party libraries have sets of custom Ajax validators; the standard validators work only after the page is submitted. Frankly, I don't like how validation in JSF works: too much magic. Many standard components (or maybe all of them) have predefined validation, and it's impossible to disable it (maybe not always, but I've met many problems with it).
    Ajax support: many 3rd-party libraries (MyFaces, IceFaces, OpenFaces, AnotherPrefixFaces...) have strong Ajax support and it works pretty well, until you meet a problem. Too much magic there as well; it's very difficult to make it work when it doesn't, even though you've done everything exactly as the manual describes.
    User-friendly URLs: people say there are libraries for this, and it can be done with filters as well, but I've never tried; it seems too complex at first sight.
    Thanks in advance for explaining how these items (or some of them) can be done with an action-based framework.

    Read the article

  • MVC: returning multiple results on stream connection to implement HTML5 SSE

    - by eddo
    I am trying to set up a lightweight HTML5 Server-Sent Events implementation on my MVC 4 web app, without using one of the libraries available for sockets and the like. The lightweight approach I am trying is: client side, EventSource (or jquery.eventsource for IE); server side, long polling with an AsyncController (sorry for dropping in the raw test code, but it gives the idea):

        public class HTML5testAsyncController : AsyncController
        {
            private static int curIdx = 0;
            private static BlockingCollection<string> _data = new BlockingCollection<string>();

            static HTML5testAsyncController()
            {
                addItems(10);
            }

            // adds some test messages
            static void addItems(int howMany)
            {
                _data.Add("started");
                for (int i = 0; i < howMany; i++)
                {
                    _data.Add("HTML5 item" + (curIdx++).ToString());
                }
                _data.Add("ended");
            }

            // here comes the async action, 'Simple'
            public void SimpleAsync()
            {
                AsyncManager.OutstandingOperations.Increment();
                Task.Factory.StartNew(() =>
                {
                    var result = string.Empty;
                    var sb = new StringBuilder();
                    string serializedObject = null;
                    // wait up to 40 seconds for a message to arrive
                    if (_data.TryTake(out result, TimeSpan.FromMilliseconds(40000)))
                    {
                        JavaScriptSerializer ser = new JavaScriptSerializer();
                        serializedObject = ser.Serialize(new { item = result, message = "MSG content" });
                        sb.AppendFormat("data: {0}\n\n", serializedObject);
                    }
                    AsyncManager.Parameters["serializedObject"] = serializedObject;
                    AsyncManager.OutstandingOperations.Decrement();
                });
            }

            // callback which returns the results on the stream
            public ActionResult SimpleCompleted(string serializedObject)
            {
                ServerSentEventResult sar = new ServerSentEventResult();
                sar.Content = () => { return serializedObject; };
                return sar;
            }

            // pushes the data onto the stream in a format conforming to HTML5 SSE
            public class ServerSentEventResult : ActionResult
            {
                public ServerSentEventResult() { }

                public delegate string GetContent();
                public GetContent Content { get; set; }
                public int Version { get; set; }

                public override void ExecuteResult(ControllerContext context)
                {
                    if (context == null)
                    {
                        throw new ArgumentNullException("context");
                    }
                    if (this.Content != null)
                    {
                        HttpResponseBase response = context.HttpContext.Response;
                        // this is the content type required by Chrome 6 for server-sent events
                        response.ContentType = "text/event-stream";
                        response.BufferOutput = false;
                        // important: Chrome fails with a "failed to load resource" error
                        // if the server puts the charset after the content type
                        response.Charset = null;
                        string[] newStrings = context.HttpContext.Request.Headers.GetValues("Last-Event-ID");
                        if (newStrings == null || newStrings[0] != this.Version.ToString())
                        {
                            string value = this.Content();
                            response.Write(string.Format("data:{0}\n\n", value));
                            //response.Write(string.Format("id:{0}\n", this.Version));
                        }
                        else
                        {
                            response.Write("");
                        }
                    }
                }
            }
        }

    The problem is on the server side, as there is still a big gap between the expected result and what actually happens. Expected result: EventSource opens a stream connection to the server; the server keeps it open for a safe time (say, 2 minutes) so that I am protected from thread leaking from dead clients; as new message events are received by the server (and enqueued in a thread-safe collection such as BlockingCollection), they are pushed down the open stream to the client: message 1 received at T+0ms is pushed to the client at T+x, message 2 received at T+200ms is pushed at T+x+200ms. Actual behavior: EventSource opens a stream connection to the server; the server keeps it open until a message event arrives (thanks to long polling); once a message is received, MVC pushes the message and closes the connection; EventSource has to reopen the connection, and this happens a couple of seconds later: message 1 received at T+0ms is pushed to the client at T+x, but message 2 received at T+200ms is pushed at T+x+3200ms. This is not OK, as it defeats the purpose of using SSE: the clients start reconnecting just as in normal polling, and message delivery gets delayed. Now, the question: is there a native way to keep the connection open after sending the first message, and to send further messages on the same connection?
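
    For comparison, the wire-level behavior EventSource expects can be shown with a framework-free sketch (Python WSGI here, purely as a protocol illustration, and assuming the WSGI server does not buffer responses): a single response whose body is a generator, so any number of data: frames go out over one open connection. This is not a drop-in fix for the AsyncController above, just the target behavior:

        import queue
        from wsgiref.simple_server import make_server

        events = queue.Queue()  # stand-in for the server-side message source

        def sse_app(environ, start_response):
            start_response("200 OK", [("Content-Type", "text/event-stream"),
                                      ("Cache-Control", "no-cache")])
            def stream():
                while True:  # real code would bound this and detect dead clients
                    try:
                        msg = events.get(timeout=15)
                        yield ("data: %s\n\n" % msg).encode()
                    except queue.Empty:
                        yield b": keep-alive\n\n"  # SSE comment frame keeps the pipe warm
            return stream()

        if __name__ == "__main__":
            make_server("127.0.0.1", 8000, sse_app).serve_forever()

    The point is that the response never completes while messages keep flowing; in MVC terms the action must keep writing to the response stream instead of completing one ActionResult per message.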

    Read the article

  • How to develop an english .com domain value rating algorithm?

    - by Tom
    I've been thinking about an algorithm that should be able to roughly guess the value of an English .com domain in most cases. For this to work I want to perform tests that consider the strengths and weaknesses of an English .com domain. A simple point-based system is what I had in mind, where each domain property can be given a certain weight to factor in its importance. I had these properties in mind:
    Domain character length: e.g. 20 points are added initially. If the domain has 4 or fewer characters, no points are subtracted. For each extra character, one or more points are subtracted on an exponential basis (the more characters, the higher the penalty).
    Domain characters: e.g. 20 points are added initially. If the domain is purely alphabetic, no points are subtracted. For each non-alphabetic character, X points are subtracted (exponentially increasing again).
    Domain name words: scans through a big offline English word database, including informal speech; e.g. words like "tweet" should be recognized. Question 1: where can I get a modern list of English words for use in such an application? Are these lists available for free? Are there lists like these with informal words? The more words found per character, the more points are added, so a domain with a lot of characters will still not get many points.
    Word hype level: I believe this is a tricky one, but it should be what differentiates perfect-but-boring domains from perfect-and-interesting ones. For example, www.peanutgalaxy.com is probably not that valuable: the algorithm should identify that peanuts and galaxies are not very popular topics on the web (this is just an example). On the other side, a domain like www.shopdeals.com should ring a bell in the hype test, as shops and deals are quite popular on the web. My initial thought is to see how often these keywords are referenced on the web, preferably with some database. Question 2: is this logic flawed, or does this hype-level test have merit? Question 3: are such "hype databases" available? Or is there anything else that could work offline? The problem with e.g. a query to Google is that it requires a lot of requests, due to the many domains to be tested.
    Domain name spelling mistakes: domains like "freemoneyz.com" etc. are generally (notice I am making a lot of assumptions in this post, but that's necessary, I believe) not valuable due to the spelling mistakes. Question 4: are there any offline APIs available to check for spelling mistakes, preferably in JavaScript, or some database that I can interact with myself? Or would a word list help here as well?
    Use of consonants, vowels, etc.: a domain that is easy to pronounce (e.g. Google) is usually much more valuable than one that is not (e.g. Gkyld). Question 5: how does one test for such pronounceability? Do you check for consonants, vowels, etc.? What does a valuable domain have? Has there been any work in this field, and where should I look?
    That is what I came up with, which leads me to my final two questions. Question 6: can you think of any more English .com domain strengths or weaknesses? Which? How would you implement these? Question 7: do you believe this idea has any merit at all, or am I too naive? Anything I should know, read or hear about? Suggestions/comments? Thanks!
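
    To make the point system concrete, here is a minimal sketch of the first three tests in Python. The weights, decay constants, and word-list file are placeholder assumptions, not recommendations:

        import math
        import re

        def load_words(path="words.txt"):
            # One lowercase word per line; any plain word list works for this sketch.
            with open(path, encoding="utf-8") as f:
                return {line.strip().lower() for line in f if line.strip()}

        def score_domain(label, words):
            """Score the bare label of a .com domain, e.g. 'shopdeals'."""
            label = label.lower()
            score = 0.0
            # 1. Length: 20 points, decaying exponentially past 4 characters.
            extra = max(0, len(label) - 4)
            score += 20 * math.exp(-0.25 * extra)
            # 2. Characters: 20 points, exponential penalty per non-letter.
            non_alpha = len(re.sub(r"[a-z]", "", label))
            score += 20 * math.exp(-0.5 * non_alpha)
            # 3. Dictionary words: reward recognized substrings, scaled by how
            #    much of the label they cover (naive: overlapping words count twice).
            covered = sum(len(w) for w in words if len(w) >= 3 and w in label)
            score += 20 * min(1.0, covered / max(1, len(label)))
            return score

    With a decent word list, score_domain("shopdeals", words) lands near the top of the scale, while something like "fr33-m0neyz-4u" is penalized on all three tests.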

    Read the article

  • Need help to properly remove duplicates in NHibernate

    - by Michael D. Kirkpatrick
    Here is the problem I am having: I have a database with over 100 records in it, and I am paging through the data to get 9 results at a time. When I added a check to see whether items are active, the results started doubling up. A little background: "Product" is the actual product line, and "ProductSkus" are the actual products that exist within that product line. When there is more than one ProductSku within a Product, a duplicate entry is returned. See the NHibernate query below:

        result = this.Session.CreateCriteria<Model.Product>()
            .Add(Expression.Eq("IsActive", true))
            .AddOrder(new Order("Name", true))
            .SetFirstResult(indexNumber).SetMaxResults(maxNumber)
            // This join is what duplicates the products
            .CreateAlias("ProductSkus", "ProdSkus", JoinType.InnerJoin)
            .Add(Expression.Eq("ProdSkus.IsActive", true))
            .CreateAlias("ProductToSubcategory", "ProdToSubcat")
            .CreateAlias("ProdToSubcat.ProductSubcategory", "ProdSubcat")
            .Add(Expression.Eq("ProdSubcat.ID", subCatId))
            // This takes out the duplicate products, but removes too many items.
            // Combined with SetFirstResult/SetMaxResults it fetches 9 records and
            // only then strips duplicates in memory: with over 100 total records,
            // max = 9 and 4 duplicates removed, it yields 5 records when there
            // should be 9. This line runs in NHibernate on the data after it has
            // been extracted from the SQL server.
            .SetResultTransformer(new NHibernate.Transform.DistinctRootEntityResultTransformer())
            .List<Model.Product>();

    I added the DistinctRootEntityResultTransformer to clean up the duplicates. The problem is that NHibernate pulls back 9 records that contain duplicates, and DistinctRootEntityResultTransformer then removes the duplicates from within those 9. What I basically need is a distinct statement run on the SQL server to begin with. However, DISTINCT in SQL will not work as-is, because NHibernate by default puts every field from every joined table into the select part of the statement, while I am only using the fields that belong to the root table (Model.Product). If I could tell NHibernate to add DISTINCT and leave the joined tables' fields out of the select clause, it would work. I used NHibernate Profiler to see the actual query:

        SELECT top 9
            this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_,
            this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_,
            prodskus1_.ID as ID373_0_, prodskus1_.Description as Descript2_373_0_,
            prodskus1_.PartNumber as PartNumber373_0_, prodskus1_.Price as Price373_0_,
            prodskus1_.IsKit as IsKit373_0_, prodskus1_.IsActive as IsActive373_0_,
            prodskus1_.IsFeaturedProduct as IsFeatur7_373_0_, prodskus1_.DateAdded as DateAdded373_0_,
            prodskus1_.Weight as Weight373_0_, prodskus1_.TimesViewed as TimesVi10_373_0_,
            prodskus1_.TimesOrdered as TimesOr11_373_0_, prodskus1_.ProductID as ProductID373_0_,
            prodskus1_.OverSizedBoxID as OverSiz13_373_0_,
            prodtosubc2_.ID as ID362_1_, prodtosubc2_.MasterSubcategory as MasterSu2_362_1_,
            prodtosubc2_.ProductID as ProductID362_1_, prodtosubc2_.ProductSubcategoryID as ProductS4_362_1_,
            prodsubcat3_.ID as ID352_2_, prodsubcat3_.Name as Name352_2_,
            prodsubcat3_.ProductCategoryID as ProductC3_352_2_, prodsubcat3_.ImageID as ImageID352_2_,
            prodsubcat3_.TriggerShow as TriggerS5_352_2_
        FROM Product this_
            inner join ProductSku prodskus1_
                on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1)
            inner join ProductToSubcategory prodtosubc2_
                on this_.ID = prodtosubc2_.ProductID
            inner join ProductSubcategory prodsubcat3_
                on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID
        WHERE this_.IsActive = 1 /* @p0 */
            and prodskus1_.IsActive = 1 /* @p1 */
            and prodsubcat3_.ID = 3 /* @p2 */
        ORDER BY this_.Name asc

    If I hand-modify the query and run it directly on the SQL server, I get the result set I want (I removed all the extra fields in the select section and added DISTINCT):

        SELECT DISTINCT top 9
            this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_,
            this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_
        FROM Product this_
            inner join ProductSku prodskus1_
                on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1)
            inner join ProductToSubcategory prodtosubc2_
                on this_.ID = prodtosubc2_.ProductID
            inner join ProductSubcategory prodsubcat3_
                on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID
        WHERE this_.IsActive = 1 /* @p0 */
            and prodskus1_.IsActive = 1 /* @p1 */
            and prodsubcat3_.ID = 3 /* @p2 */
        ORDER BY this_.Name asc

    The big question I now must ask is: what must I change in the NHibernate query to ultimately get the exact same result? Thanks in advance.
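
    The usual shape of the fix is to page over the distinct root ids first and only then load the entities, so the join can no longer inflate the page; in NHibernate this is typically done with a DetachedCriteria that projects the distinct ids plus Subqueries.PropertyIn on the outer query. Since the exact criteria incantation depends on the mappings, here is the two-step idea sketched with Python/SQLAlchemy instead; the model definitions are simplified stand-ins for Product and ProductSku:

        from sqlalchemy import ForeignKey, select
        from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

        class Base(DeclarativeBase):
            pass

        class Product(Base):
            __tablename__ = "product"
            id: Mapped[int] = mapped_column(primary_key=True)
            name: Mapped[str]
            is_active: Mapped[bool]

        class ProductSku(Base):
            __tablename__ = "product_sku"
            id: Mapped[int] = mapped_column(primary_key=True)
            product_id: Mapped[int] = mapped_column(ForeignKey("product.id"))
            is_active: Mapped[bool]

        def page_of_products(session: Session, offset: int, limit: int):
            # Step 1: page over DISTINCT root ids only; the SKU join can no
            # longer inflate the page, so asking for 9 rows yields 9 products.
            ids = session.scalars(
                select(Product.id)
                .join(ProductSku, ProductSku.product_id == Product.id)
                .where(Product.is_active, ProductSku.is_active)
                .distinct()
                .order_by(Product.id)
                .offset(offset)
                .limit(limit)
            ).all()
            # Step 2: load the full entities for exactly those ids.
            return session.scalars(
                select(Product).where(Product.id.in_(ids)).order_by(Product.name)
            ).all()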

    Read the article

  • Help to argue why to develop software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great, often a blessing, and cost-effective (instead of leasing expensive cables). I am not arguing against remote desktops as such; only that, given the alternative of a remote desktop or a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices; in my case I am required to be physically present in the office when developing software. Background: I work in a company whose main business is not software development. The company IT policies are therefore focused mainly on security and on efficiently deploying and maintaining thousands of computers for users, and the typical employee runs typical office applications, like a word processor. Because safety/stability is such a big priority, every non-production system or application must be deployed into a physically separate network, called the test network. Software development, of course, also belongs in the test network. To access the test network the company has created a standard policy, which dictates that access shall go only via a remote desktop client: practically, from one's production computer one opens a remote desktop client to a virtual computer located in the test network, and on that virtual computer's remote desktop one can access, run and install all development tools, like the Eclipse IDE. The other solution is a dedicated physical computer that is physically connected only to the test network. Both solutions are available in the company. I have tested both approaches and found running Eclipse, SQL Developer, etc. in the remote desktop client to be sluggish: keystrokes are delayed, commands like alt-tab take me out of the remote client, and screen resolution and colors are different, to mention just a few issues. So there is nothing technically wrong with the remote client; it is just not optimal, and frankly demotivating. Now, with the new policies put in place, the plan is to remove the physical computers connected to the test network. I am looking for help to argue why software developers should have a dedicated physical development computer in order to be productive and cost-effective. Remember that we are physically in the office. Note also that we are talking about approx. 50 computers out of 2,000 employees, so the extra budget is relatively small; this is more about policy than cost. Please note that there are lots of similar setups in other companies that work great thanks to perfectly tuned systems. In my case, however, it is sluggish, and it would cost more money to troubleshoot and fine-tune the performance than to keep a few physical computers. As a business case we argued that productivity would go down by 25%, though my feeling is that the reality is closer to 50%. This business case wasn't really accepted, and I find it very difficult to defend to managers who have never used a rich IDE in their lives, never mind developed software. Furthermore, the test network and remote client have no guaranteed service level, so they are down a few hours per month with the lowest priority on the fix list. Help is appreciated.

    Read the article

< Previous Page | 346 347 348 349 350 351 352 353 354 355 356 357  | Next Page >