Search Results

Search found 22756 results on 911 pages for 'power query'.

  • NHibernate - Getting an exception when running a simple join query

    - by Muhammad Akhtar
    Hi, I am getting an exception when I run a SQL query with an inner join. Here is what I am doing (very simple):

        ISession session = NHibernateHelper.GetCurrentSession();
        string query = string.Format("select Documents.TypeId from Documents inner join DocumentTrackingItems on Documents.Id = DocumentTrackingItems.DocumentId WHERE DocumentTrackingItems.ItemStepId = {0} order by Documents.TypeId asc", 13);
        System.Collections.ArrayList document = (System.Collections.ArrayList)session.CreateSQLQuery(query, "document", typeof(Document)).List();

    I am getting this exception:

        Exception Details: System.IndexOutOfRangeException: Id

    What is wrong in my query? Thanks.
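
    For reference, a minimal sketch of one commonly suggested alternative: since the query projects a single column rather than a whole entity, it can be mapped as a scalar instead of as a Document, so NHibernate does not look for an Id column in the result set. The column type and parameter name below are assumptions, not taken from the question.

        // Hedged sketch: map only the projected column as a scalar; assumes TypeId is an int column.
        // Requires: using NHibernate;  (for ISession and NHibernateUtil)
        ISession session = NHibernateHelper.GetCurrentSession();
        System.Collections.IList typeIds = session
            .CreateSQLQuery(
                "select d.TypeId from Documents d " +
                "inner join DocumentTrackingItems t on d.Id = t.DocumentId " +
                "where t.ItemStepId = :stepId " +
                "order by d.TypeId asc")
            .AddScalar("TypeId", NHibernateUtil.Int32)   // one scalar per selected column
            .SetParameter("stepId", 13)
            .List();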

    Read the article

  • In MS Access database, INSERT query to insert a value containing an apostrophe

    - by Suryakavitha
    In an MS Access database I need an INSERT query for a value that contains an apostrophe: N'tetarnyl. I have this insert query:

        OleDbCommand cmd = new OleDbCommand("insert into checking values('" + dsGetData.Tables[0].Rows[i][0].ToString() + "','" + dsGetData.Tables[0].Rows[i][1].ToString() + "')", con);

    but it is showing me the error "syntax error (missing operator) in query expression". Any idea how to write the insert query so that it inserts N'tetarnyl (including the apostrophe)?
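
    The usual suggestion for this class of error is to stop concatenating the value into the SQL text and pass it as a parameter instead, so the apostrophe never reaches the query parser. A hedged sketch reusing the names from the question (the parameter names are arbitrary; OLE DB binds them by position):

        // Hedged sketch: parameterized insert, so N'tetarnyl needs no escaping.
        // Requires: using System.Data.OleDb;
        using (OleDbCommand cmd = new OleDbCommand("insert into checking values (?, ?)", con))
        {
            cmd.Parameters.AddWithValue("@p1", dsGetData.Tables[0].Rows[i][0].ToString());
            cmd.Parameters.AddWithValue("@p2", dsGetData.Tables[0].Rows[i][1].ToString());
            cmd.ExecuteNonQuery();
        }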

    Read the article

  • What is the Microsoft Query Syntax for Subqueries?

    - by Kuyenda
    I am trying to do a simple subquery join in Microsoft Query, but I cannot figure out the syntax. I also cannot find any documentation for the syntax. How would I write the following query in Microsoft Query?

        SELECT *
        FROM ( SELECT Col1, Col2 FROM `C:\Book1.xlsx`.`Sheet1$` ) AS a
        JOIN ( SELECT Col1, Col3 FROM `C:\Book1.xlsx`.`Sheet1$` ) AS b
          ON a.Col1 = b.Col1

    Is there official documentation for Microsoft Query? Thanks!

    Read the article

  • MySQL query using multiple criteria from checkboxes

    - by jungle_programmer
    I would like to do a multiple-criteria search query using several checkboxes, each of which corresponds to a particular textbox. How do I create a MySQL query that filters on the checked and unchecked checkboxes (probably using if statements)? The query should filter on the checked and unchecked boxes and combine the criteria with the AND condition. Thanks
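
    The question does not say which language drives the form, but the underlying idea is language-agnostic: append one AND clause per checked box and bind the corresponding textbox value as a parameter. A hedged illustration in C# with MySQL Connector/NET, where the table, column, and variable names are all made up for the sketch:

        // Hedged sketch: build the WHERE clause only from the boxes that are checked.
        // Requires: using MySql.Data.MySqlClient; using System.Text;
        StringBuilder sql = new StringBuilder("SELECT * FROM products WHERE 1=1");
        MySqlCommand cmd = new MySqlCommand();

        if (nameChecked)                       // checkbox paired with the "name" textbox
        {
            sql.Append(" AND name = @name");
            cmd.Parameters.AddWithValue("@name", nameText);
        }
        if (cityChecked)                       // checkbox paired with the "city" textbox
        {
            sql.Append(" AND city = @city");
            cmd.Parameters.AddWithValue("@city", cityText);
        }

        cmd.CommandText = sql.ToString();
        cmd.Connection = connection;           // an open MySqlConnection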

    Read the article

  • Using Yahoo local search query for iPhone

    - by robbmcmahan
    I'm using Yahoo local search in an iPhone app and trying to query everything in a certain location. According to the API docs you can pass "*" as the query and it will return everything. I've tried passing it several different ways, including the way below, but it does not work unless I actually pass it a real query. Does anyone know how or what I need to pass to make it query everything?

        [self setQuery:[@"*" stringByAddingPercentEscapesUsingEncoding:NSASCIIStringEncoding]];

    Thanks

    Read the article

  • Query between SQL server and Client side

    - by Karim
    I created a query:

        Select * from HR_Tsalary where month = '3' and year = '2010'

    The result is 473 records, and I found 2 duplicate records. I then created another query to find the duplicate records only:

        SELECT Emp_No, COUNT(*) FROM HR_Tsalary WHERE year = '10' AND month = '3' GROUP BY Emp_No HAVING COUNT(*) > 1

    The result is zero records from the client side (through Visual Basic ADODB code), but when I use the same query from the server the result is 2 records. Is there any difference between running a query from the server side and from the client side?

    Read the article

  • Dovecot unable to perform mysql query

    - by NathanJ2012
    I have been following the ISPMail tutorials on workaround.org (the 2.9 Wheezy version) and thus far everything has been working fine. When I reached the "Testing email delivery" step I noticed an error about the query in the output from /var/log/mail.log:

        May 14 06:48:59 mail postfix/pickup[17704]: EA4AD240A98: uid=0 from=<root>
        May 14 06:48:59 mail postfix/cleanup[17776]: EA4AD240A98: message-id=<[email protected]>
        May 14 06:48:59 mail postfix/qmgr[17706]: EA4AD240A98: from=<[email protected]>, size=429, nrcpt=1 (queue active)
        May 14 06:49:00 mail dovecot: auth-worker(17782): mysql(127.0.0.1): Connected to database mailserver
        May 14 06:49:00 mail dovecot: auth-worker(17782): Warning: mysql: Query failed, retrying: Table 'mailserver.users' doesn't exist
        May 14 06:49:00 mail dovecot: auth-worker(17782): Error: sql([email protected]): User query failed: Table 'mailserver.users' doesn't exist (using built-in default user_query: SELECT home, uid, gid FROM users WHERE username = '%n' AND domain = '%d')
        May 14 06:49:00 mail dovecot: lda([email protected]): msgid=<[email protected]>: saved mail to INBOX
        May 14 06:49:00 mail postfix/pipe[17780]: EA4AD240A98: to=<[email protected]>, relay=dovecot, delay=0.09, delays=0.03/0.01/0/0.06, dsn=2.0.0, status=sent (delivered via dovecot service)
        May 14 06:49:00 mail postfix/qmgr[17706]: EA4AD240A98: removed

    I found it rather interesting that it isn't finding the DB, so I went back through and checked EVERY file that I touched that involved the DB (including the postfix cf files) and everything is correct, so I am baffled at this point. Oddly enough, the email still made it to the correct destination in /var/vmail/domain.com/. Should I be worried about this or am I missing something here? Since it is a message from dovecot, it would be the query from dovecot-sql.conf.ext, which I am including here:

        driver = mysql
        connect = host=127.0.0.1 dbname=mailserver user=blocked password=***REMOVED***
        default_pass_scheme = PLAIN-MD5
        password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';

    Read the article

  • Android openvpn + zeroconf browser sending mdns query packets over eth0 instead of tap0 interface on wifi

    - by Mrunal
    On an Android device, I am connecting to a remote network using OpenVPN to perform service discovery.

    WORKING CASE: After the device is camped on 3G/4G and connected to the remote network by OpenVPN, when the zeroconf browser is launched I can see the mDNS query packets being sent through the tap0 interface, resulting in the services being rendered in the browser. From the tcpdump captured on the device, I can see that the mDNS query packets are sent to the tap0 interface.

        tap0 ip: 192.168.11.200

        Route table information:
        Destination     Gateway         Genmask          Flags  Metric  Ref  Use  Iface
        76.26.112.234   10.179.240.1    255.255.255.255  UGH    0       0    0    pdpbr1
        10.179.240.1    *               255.255.255.255  UH     0       0    0    pdpbr1
        32.1.72.136     *               255.255.255.255  UH     0       0    0    pdpbr0
        10.179.240.0    *               255.255.255.0    U      0       0    0    pdpbr1
        192.168.11.0    *               255.255.255.0    U      0       0    0    tap0
        default         192.168.11.1    0.0.0.0          UG     0       0    0    tap0

    NOT WORKING CASE: However, after switching on Wi-Fi and connecting it to the remote network, when the zeroconf browser is launched the mDNS query packets are sent to the eth0 interface instead of tap0, due to which we cannot see the services. From the tcpdump captured on the device, I can see that the mDNS query packets are sent to the eth0 interface.

        tap0 ip: 192.168.11.200
        eth0 ip: 192.168.43.230

        Route table information:
        Destination     Gateway         Genmask          Flags  Metric  Ref  Use  Iface
        76.26.112.234   192.168.43.1    255.255.255.255  UGH    0       0    0    eth0
        32.1.72.136     *               255.255.255.255  UH     0       0    0    pdpbr0
        192.168.11.0    *               255.255.255.0    U      0       0    0    tap0
        192.168.43.0    *               255.255.255.0    U      0       0    0    eth0
        default         192.168.11.1    0.0.0.0          UG     0       0    0    tap0

    In the above case, even though there is a default route for tap0, all the multicast packets are being routed through eth0. How is this possible? Has anyone observed a similar problem? It would be really helpful if you can help us discover services through the zeroconf browser after the device is connected to the remote network via OpenVPN over Wi-Fi. Thank you very much, Mrunal

    Read the article

  • How to take a search query and append modifiers to the end of it

    - by Kimber
    This is a Greasemonkey question. What I'm trying to do is modify an old Google discussions script. What we're wanting to do is be able to take the Google search query and add modifiers to the end of it. Like this:

        search query: "superuser"
        modifiers: inurl:greasemonkey+question
        end result: "superuser" inurl:greasemonkey+question

    The old script creates a new div within the "hdtb_more_mn" element, which is where you get the new discussions tab. However, since the "tbm=dsc" option to do a discussion search has died, this script no longer works. Hence the need to add modifiers to your searches. I tried to edit the script, but it appends the modifiers to the end of the URL, which includes "&client=firefox-a&hs=8uS&rls=org.mozilla:en-US:official". This means you're also searching for the above as well as your query, which doesn't work. I would like to be able to append the modifiers at the end of the search query, rather than the whole URL. I'm just not sure how to code it to where it adds the below "&tbm=" stuff within "discussionDiv.innerHTML" to the end of the query. The Google search id seems to be "gbqfq" for the search box, but I'm not sure how to add this id. Here is the old script:

        // ==UserScript==
        // @name          Add Back Google Discussions
        // @version       1.4
        // @description   Adds back the Discussion filters to Google Search
        // @include       *://*.google.tld/search*
        // ==/UserScript==

        var url = location.href;
        if (url.indexOf('tbm=dsc') < 0)
            addFilterType('dsc', 'Discussions');

        function addFilterType(val, name) {
            var searchType = document.getElementById('hdtb_more_mn');
            var discussionDiv = document.createElement('DIV');
            discussionDiv.className = 'hdtb_mitem';
            discussionDiv.innerHTML = '<a class="q qs" href="'+ (url.replace(/&tbm=[^&]*/g,'') + '&tbm=' + val) +'">'+name+'</a>';
            searchType.innerHTML += discussionDiv.outerHTML;
        }

    Thanks for any help, or suggestions on who to ask. Google Chrome has an extension for discussion searches, but FF doesn't seem to have one as of yet, which is why I'm trying to modify the above.

    Read the article

  • Passing the CAML thru the EY of the NEEDL

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Passing the CAML thru the EY of the NEEDL

    Definitions: CAML (Collaborative Application Markup Language) is an XML-based markup language used in Microsoft SharePoint technologies.

    Anonymous: A camel is a horse designed by committee.
    Dov Trietsch: A CAML is a HORS designed by Microsoft.

    I was advised against putting any Camel and Sphinx rhymes in here. Look it up in Google!
    _____
    Now that we have dispensed with the dromedary jokes (BTW, I have many more, but they are not fit to print!), here is an interesting problem and its solution. We have built a list where the title must be kept unique, so I needed to verify the existence (or absence) of a list item with a particular title. Two methods came to mind:

    1: Span the list until the title is found (result = found) or until the list ends (result = not found). This is an algorithm of complexity O(N) and for long lists it is a performance sucker.
    2: Use a CAML query instead. Here, for short lists we'll encounter some overhead, but because the query results in an SQL query on the content database, it is of complexity O(LogN), which is significantly better and scales perfectly.

    Obviously I decided to go with the latter, and this is where the CAML s--t hit the fan. A CAML query returns an SPListItemCollection and I simply checked its Count. If it was 0, the item did not already exist and it was safe to add a new item with the given title. Otherwise I cancelled the operation and warned the user. The trouble was that I always got a positive. Most of the time a false positive. The count was greater than 0 regardless of the title I checked (except when the list was empty, which happens only once). This was very disturbing indeed. To solve my immediate problem, which was speedy delivery, I reverted to the "Span the list" approach, but the problem bugged me, so I wrote a little console app by which I tested and tweaked and tested, time and again, until I found the solution. Yes, one can pass the proverbial CAML thru the ey of the needle (e's missing on purpose). So here are my conclusions:

    CAML that does not work (Note: QT is my quote):

        char QT = Convert.ToChar((int)34);
        string titleQuery = "<Query><Where><Eq>";
        titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>";
        titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where></Query>";
        titleQuery += "<ViewFields><FieldRef Name=" + QT + "Title" + QT + "/></ViewFields>";

    Why? Even though U2U generates it, the <Query> and </Query> tags do not belong in the query that you pass. Start your query with the <Where> clause. Also, the <ViewFields> clause does not belong. I used this clause to limit the returned collection to a single column, and I still wish to do it. I'll show how this is done a bit later. When you use the <Query> </Query> tags in your query, it's as if you did not specify the query at all. What you get is the all-inclusive default query for the list. It returns every column and every item. It is expensive for both server and network because it does all the extra processing and eats plenty of bandwidth.

    Now, here is the CAML that works:

        string titleQuery = "<Where><Eq>";
        titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>";
        titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where>";

    You'll also notice that inside the unusable <ViewFields> clause above, we have a <FieldRef> clause. This is what we pass to the SPQuery object.
    Here is how:

        SPQuery query = new SPQuery();
        query.Query = titleQuery;
        query.ViewFields = "<FieldRef Name=" + QT + "Title" + QT + "/>";
        query.RowLimit = 1;
        SPListItemCollection col = masterList.GetItems(query);

    Two things to note: we enter the view fields into the SPQuery object and we also limit the number of rows that the query returns. The latter is not always done, but in an existence test there is no point in returning hundreds of rows. The query will now return one item or none, which is all we need in order to verify the existence (or non-existence) of items. Limiting the number of columns and the number of rows is a great performance enhancer. That's all folks!!
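
    Pulling the pieces of the post together, a hedged sketch of the complete existence check might look like the following (the method and variable names are mine, not the author's; the SPQuery members used are exactly the ones shown above, from Microsoft.SharePoint):

        // Hedged sketch: returns true if an item with the given title already exists in the list.
        // Requires: using Microsoft.SharePoint;
        private static bool TitleExists(SPList masterList, string uniqueID)
        {
            char QT = Convert.ToChar(34);
            string titleQuery = "<Where><Eq>";
            titleQuery += "<FieldRef Name=" + QT + "Title" + QT + "/>";
            titleQuery += "<Value Type=" + QT + "Text" + QT + ">" + uniqueID + "</Value></Eq></Where>";

            SPQuery query = new SPQuery();
            query.Query = titleQuery;
            query.ViewFields = "<FieldRef Name=" + QT + "Title" + QT + "/>";
            query.RowLimit = 1;                       // existence test: one row is enough

            return masterList.GetItems(query).Count > 0;
        }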

    Read the article

  • Computer turns off unexpectedly

    - by Shahar
    My computer turns itself off unexpectedly after some time of use. It appears that this might be temperature related, but not for sure. I installed 2 tools that monitor temperature: SpeedFan and CPU Thermometer. The only definite finding is that there is a sensor (labelled temp1 in SpeedFan and CPU in CPU thermometer), which shows a temperature of 108C a second before the computer powers down. Until that moment, this sensor shows a constant temperature of 40C. I can usually reproduce the shutdown by viewing a few movies together, which cause another sensor (labelled CPU in SpeedFan) to go up to 60sC, but I do experience the problem even at times when this sensor remains low and cool. It does seem that the problem is more frequent if the computer is turned back on immediately after shutdown, but not always. I have had other hardware problems recently, which might be related: My hard disk heated up. I installed a fan on it, which worked to reduce the heat. The hard disk sensor shows around 40C. I had occasional blue screens and hard disk failures. Replacing the power supply seems to solve both these issues, but then this powerdown problem began appearing. I would appreciate any suggestions as to how to determine where the fault is, or what needs to be replaced.

    Read the article

  • Ungrounded laptop (Macbook Pro) buzzes in headphones, weird feeling when fingers brush lightly

    - by donut
    I've got a nearly 3-year-old MacBook Pro 15" 2.16GHz (MacBookPro2,2). When I have am not using the extended, grounded adapter for the power supply, just using the simple, two-prong plug I can hear a buzzing when I use very sensitive earbuds. This goes away if I touch a metal part of the laptop. Also, I can feel a weird, fuzzy feeling when I brush the metal parts of the laptop lightly with my fingers/skin. Somewhat similar to feeling of a touching hair or a balloon that's charged with static electricity. But I'm not getting sparks or anything. And if I'm touching a metal part of my laptop solidly (not just brushing it) and then I touch someone else's skin I can feel the same effect and so can my victim. I've noticed similar effects with an ungrounded electric blanket. But with that the buzzing can be easily heard without headphones. Is this a defect, normal, or something else? And what exactly is happening?

    Read the article

  • How expensive is it to run a PC 24/7, and how do I figure that out?

    - by jasondavis
    I realize this question is difficult to answer, as it depends on the user's location, what their PC is doing, and what hardware it consists of, along with other factors, but I am hoping someone could give me a very rough estimate. I have always run many PCs in my home 24/7 and I am just now looking at it from a money/cost-of-electricity point of view. 1) I live in Central Florida. Can anyone guesstimate the average monthly or daily cost of running your average PC? Intel quad-core processor, 1 SSD drive for OS and programs, and 4-5 1-2 TB hard drives in a RAID setup for data. 750-watt PSU. What would your guess be? 2) Also, is there an accurate way to figure this out (non-super-technical and not confusing to a non-math person, please)? I have seen those Kill A Watt devices; do they figure this kind of stuff out for you? 3) Does a larger PSU make your PC consume more power? Thanks for any help; you can most likely tell I am somewhat lost about this!
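
    The arithmetic itself is straightforward once you know (or assume) the average draw at the wall, which is what a Kill A Watt style meter measures; a PSU's 750 W rating is its maximum capacity, not what the machine actually pulls. A hedged back-of-the-envelope sketch, where the 200 W draw and the $0.12/kWh rate are both assumptions to be replaced with measured numbers:

        // Hedged sketch: rough running-cost estimate; wattage and rate are assumed values.
        // Requires: using System;
        double averageDrawWatts = 200;      // measured at the wall, not the PSU rating
        double pricePerKwh = 0.12;          // $/kWh, taken from the utility bill

        double kwhPerDay    = averageDrawWatts * 24 / 1000.0;   // 4.8 kWh per day
        double costPerDay   = kwhPerDay * pricePerKwh;          // about $0.58 per day
        double costPerMonth = costPerDay * 30;                  // about $17 per month

        Console.WriteLine($"~${costPerDay:F2}/day, ~${costPerMonth:F2}/month");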

    Read the article

  • PC shuts down automatically after 10-20 seconds. No POST screen, no beeps

    - by emzero
    I have this not-so-old computer that's not being used for a year or so. Specs: Motherboard: ASUS PN5-E SLI CPU: Intel Core2Duo E4300 RAM:2x2GB SuperTalent DDR2-800 VGA: Zogis GeForce 7950GT PSU: Vitsuba San-55-S 550w HD: No hardrives yet When I power on the computer, everything seem to start, but right away the whole system shuts down. I've removed and changed the RAM sticks, take out the VGA, everything I could think of. So what could it be causing this? The PSU? The motherboard is dead? The CPU? Any help to isolate the problem will be useful. Thanks PS: Please don't close the question, this could be helpful to anybody having a similar problem, even with different hardware. UPDATE I've removed the old thermal paste and apply a brand new one. I also cleaned every dust using a high pressure gas dust remover. Checked for bad capacitors, all of them seem ok. Opened the PSU, removed big giant dust balls, cleaned with high pressure dust remover. Still the same problem, but now it stays powered on for almost 20 seconds maybe. But no POST screen, no beeps at all, nothing. So I suspect it's a motherboard or PSU failure. Unfortunately I don't have an energy tester to test the PSU... Don't know what else to try. I don't have another 775-motherboard to test the CPU, RAM and VGA with it.

    Read the article

  • Piecing together low-powered hardware for an RS-232 terminal server

    - by Fred
    I'm working on reconstructing my Cisco lab for training/educational purposes and I found that the actual terminal server I have is dead. I have a couple of 8-port PCI serial cards which would be more than ample for my lab, but I don't want to leave my personal computer running to be able to access the console ports. Ideally I would access the terminal server remotely, either by SSH/RDP to the box (depending on what OS I go with) or by installing a software package that allows me to telnet directly to a serial port. I know I've found a program that does this under Linux in the past but its name escapes me at the moment. I'm thinking about scavenging for some old hardware, on eBay or something, to put together a low-powered PC. Needs to be something that: Has Low-power consumption Has at least 2 PCI slots (though I certainly wouldn't complain about having more) Has onboard Ethernet (or, if not, another PCI or ISA slot (not shared)) Can be headless once an OS installed (probably Linux) I'm currently leaning towards an old fashioned Pentium (sub-133MHz era) but I am wondering if anybody else knows of another platform/mobo that would suit these needs. Alternatively, I've been considering buying a Raspberry Pi and a big USB hub along with a bunch of USB-Serial adapters but this sounds like it'd get messy quick with cables and adapters all over the place, and I may not even have the same ttyS#'s between boots.

    Read the article

  • Intermittent Trouble Entering Hibernate on WinXP

    - by kquinn
    My personal desktop, running 32-bit Windows XP SP2 (with 4GB RAM, 2.75GB addressable, swap disabled, hiberfil.sys existing and contiguous on C:\; SP3 is not installed because SP2 has been working fine and I do not want to re-qualify with SP3 just for sheer perversity) typically gets hibernated at night. For a long time this worked great, but recently the machine has had trouble entering hibernation. Sometimes when I press my power button (configured to hibernate), the box will start the procedure for hibernating (i.e., go to the blue "Windows XP" background logo and display a message about entering hibernation), but before displaying the usual blue-on-black hibernation progress bar it will drop back to the desktop. No error messages appear, on screen or in the system log. The only record of unsuccessful hibernation attempts in the system log, which proudly proclaims that "The Windows Image Acquisition (WIA) service entered the running state." once per failed hibernation attempt. The problem is almost certainly resource related: if I then close one or more applications which are running, and repeat the exact same process, the machine will hibernate perfectly. There does not appear to be a reliable high-water mark for virtual or physical memory use, below which the machine is guaranteed to hibernate; it's different every time (though typically, below about 1.1–1.4 GB memory usage seems to be where hibernate succeeds most often). Memory may not even be the relevant resource; as far as I know, it could also be handles or sockets. This behavior is relatively recent: it has only started in the last few months; before then, I could hibernate reliably no matter what the current resource use of the system. This machine claims to have hotfix Q909095 installed, but since the symptoms of my problem match KB909095 rather well, I'm suspicious if this fix is actually working as intended. Any ideas on how to fix this or where to start debugging?

    Read the article

  • PC dies when running at 100% CPU

    - by user155631
    I recently wrote some Java code to generate images of the Mandelbrot set (fractal). I made use of the new Fork/Join facility in Java 7 to run separate threads on all four cores (2 real, 2 virtual)simultaneously, using a large number of iterations for greater accuracy. The problem is, the process runs fine for about a minute, and then it's as if someone has pulled the plug and the PC just dies. I thought it must be the CPUs overheating, so I ran Real Temp to monitor the temperature. It's an Intel i3 processor. I can see the temperature creeping up to 70 degrees, and then it seems to level off there and run for about another 30 seconds before dying. According to Real Temp, there's still a gap of 35 degrees between the actual temperature and TJ max. I also tried disabling "CPU TM function" in the BIOS, but the problem still occurs. A colleague suggested that it might be a power supply problem, so I borrowed a more powerful PSU (can't remember what wattage it was, but it's higher than mine which is 500W). The exact same thing still happens though. Is anyone able to suggest what the problem might be, or what I can try next?

    Read the article

  • What's Keeping My Computer Awake?

    - by phantomdata
    First the question; How do I figure out what is preventing my Windows 7 computer from going into sleep mode? Second; some background... I've been struggling with this for a few days and am utterly perplexed. I setup sleep mode on my Windows 7 PC a few weeks ago, and all was well. The PC would sleep as expected and I was snuggly in knowing that my computer was saving power and some wear and tear on the components (we'll leave the 'is it better to sleep' debate for another thread/day, please don't start it). Well, I noticed the other night that my system stopped ever going to sleep. I set the sleep time down to 1 minute and wandered fully away from the PC (ensuring that no errant mouse or keyboard movements would occur) and the PC never went to sleep. I've also observed this over longer intervals as well, such as overnight. I have sleep mode enabled, of course "multimedia settings - When Sharing Media" is set to allow the computer to sleep. "powercfg -lastwake" show nothing of interest, since it never goes to sleep and can't wake up. "powercfg /requests" shows 3 entries - all "[DRIVER] ?". I assume that 2 of these are my mouse and keyboard - as I've recently used them to run the powercfg command. I'm at a loss for the third though. I've unhooked all USB peripherals save for my keyboard and mouse. Wake on LAN is disabled in my BIOS. I know that you can disable all apps from waking/preventing sleep - but I want the ability to remain for those apps that do legitimately need to keep the system awake. So; does anyone know of a way to figure out what the 3rd phantom "[DRIVER] ?" is in powercfg /requests?

    Read the article

  • PC sometimes turns on, sometimes not

    - by cprogcr
    Some time ago, the PC gave the same problem. It wouldn't turn on. When i pressed the button, it turned on but showed nothing. I had replaced the CPU and that seemed to work. I didn't use the PC that much, rarely you know. But now, after some time, it gives the same problem. It turns on, the front light is on, it makes the normal noise the pc makes when it's turned on , but if I try to shut it down by holding the power button it just doesn't work. So again, I tried replacing the CPU and it worked again. I kept it all day working, just to be sure, and sometimes I would restart it and it would work again. No problems at all. So I turned it off at night, and next morning it just would make the same problem. So I tried replacing the PSU. And it worked again. Now while I had the PC with the new PSU, i tried to insert the old CPU, and again, it would turn on. The same thing, tried restarting too, and it would work. But this morning the same problem happened. Edit: I also tried another CPU today and yet no signs of working. I don't know now what to think.

    Read the article

  • Computer will freeze/lock up after doing relatively stressful things

    - by GrowingCode247
    I'll first start off by saying that the issue GENERALLY doesn't occur unless I'm doing something remotely stressful for my computer. This issue used to occur whenever it felt it was necessary, however has not occurred completely randomly for a while now (thankfully) My computer's specs: CPU: AMD Phenom II X4 960T GPU: GeForce GTX 760 Memory: 16 GB RAM Resolution Used: 1680x1050, 59Hz (strange number for refresh rate?) res is highest for monitor Nvidia Driver version: 331.65 OS: Microsoft Windows 7 Ultimate (64-bit) Sometimes I will be able to go 2-3 games (about an hour, depending) and sometimes it will go maybe one game (20-30 minutes) and then my computer will run sluggishly and leave me unable to do much of anything. I can sometimes interact with programs at a very basic level (maximizing, minimizing), and I usually cannot close them in any way, not even through Task Manager. The highest temperature my GPU reaches is 76C, with the average being around 73C. During the time the temperatures are around 73C, my GPU's RAM usage is anywhere between 1250-1300 (out of 2GB). My CPU's temperature never goes over 60C, thankfully. The PSU should be fine. It's very mildly dusty but I feel as though that would not be causing this problem... I will clean it out as soon as everything else has been ruled out. Honestly I have no clue how to test the PSU for problems - same goes for my Motherboard. I cannot really think of what could be causing these freezes otherwise. Event Viewer details: EventID: 1 - VDS Basic Provider (I've no clue what this is) EventID: 3 - Kernel-EventTracing (Again, lost) EventID: 8003 - bowser (this seems fishy) and the one critical that I know others have been dealing with as I've browsed some other responses on the web: EventID: 41 - Kernel-Power any help to solve this problem would be GREATLY appreciated.

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendation still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.make you think. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table the hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad, EF concept is to allow you to create your object structure without (or with as little as possible) consideration on the actual database implementation of your table. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And ways to implement mechanism that are similar to inheritance. Bypassing inheritance with Interfaces First thing to know with trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example here is a IEntity interface that allow you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } Type EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model Type EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model. 
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that what I would have thought so as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault; } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pitance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 and dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string Name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be force to re-parse. So, the rule of thumb is to return a List< of objects instead. 
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read inherirante), I have seen upwards to 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql When you load a page with pre-compiled queries the first time you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled queries will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = from dog in YourContext.DogSet where dog.ID == id select dog; } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lamba expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which one you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == true && dog.Age == myParams.age) && (myParams.checkName == true && dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benifits of a pre-compiled quert. 
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dod> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name == @Name "; if( age > 0 ) query = query + " and dog.Age == @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation - each different combination of parameters generate a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a normal world search. - Noone likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query and then refine the results protected List<Dod> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City == @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ); return dogs; } It is particularly useful when you start displaying all the data then allow for filtering. Problems: - Could lead to serious data transfer if you are not careful about your subset. - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on the Dog.Owner.Name So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - Use lambda-based query building when you don't care about pre-compiling your queries. - Use fully-defined pre-compiled Linq query when your object structure is not too complex. - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries are small (which means fewer pre-compilation hits). - Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data on the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db). 
Singleton access The best way to deal with your context and entities accross all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { On3GoEntities entity = new On3GoEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together, when further queries are made from the database, which is what is of interest here. For example, lets assume that Dog with ID == 2 has an owner which ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Owner owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issue, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data your query with NoTracking can be used to make update/insert/delete in the database? If so, don't use NoTracking because associations are not tracked and will causes exceptions to be thrown. In a page where there are absolutly no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix then you risk having the framework trying to Attach() a NoTracking object to the context where another copy of the same object exist with tracking on. Basicly, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail". 
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking It depends on the queries. Some are much more succeptible to tracking than other. I don't have a fast an easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking object. The first one is that the object is cached, so subsequent call for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntity object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanism could extend that). What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happens. Also remember the piece above about having the associations connected automatically for your? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link to between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Lets say you have you Dog object public class Dog { public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call that very often. Second, each different pages will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first request for the Dog object: static public Dog Get(int id) { return GetDog(entity,"");} static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +

    Read the article

  • PHP PSR-0 + several namespaces in one file and autoload

    - by Nemoden
I've been thinking for a while about defining several namespaces in one PHP file and, consequently, having several classes inside that file. Suppose I want to implement something like Doctrine\ORM\Query\Expr: Expr.php Expr |-- Andx.php |-- Base.php |-- Comparison.php |-- Composite.php |-- From.php |-- Func.php |-- GroupBy.php |-- Join.php |-- Literal.php |-- Math.php |-- OrderBy.php |-- Orx.php `-- Select.php It would be nice if I had all of this in one file - Expr.php: namespace Doctrine\ORM\Query; class Expr { // code } namespace Doctrine\ORM\Query\Expr; class Func { // code } // etc... What I'm thinking of is a directory naming convention and, unlike PSR-0, having several classes and namespaces in one file. It's best explained by the code: ls Doctrine/orm/query Expr.php - that's it, only Expr.php. Since Expr.php is what I call a "meta-namespace" for Expr\Func, it makes sense to place all the classes inside Expr.php (as shown above). So the vendor name still starts with an upper-cased letter (Doctrine) and the other parts of the namespace start with a lower-cased letter. We can write an autoloader that respects this notion: function load_class($class) { if (class_exists($class)) { return true; } $tokenized_path = preg_split('/[_\\\\]/', $class); // array('Doctrine', 'orm', 'query', 'Expr', 'Func'); // ^^^^ // first, we are looking for the first upper-cased namespace part // and if it's not last (not the class name), we use it as the filename // and wipe away the rest to compose the path to the file we need to include if (FALSE !== ($meta_class_index = find_meta_class($tokenized_path))) { $new_tokenized_path = array_slice($tokenized_path, 0, $meta_class_index); $path_to_class = implode(DIRECTORY_SEPARATOR, $new_tokenized_path); } else { // no meta class found $path_to_class = implode(DIRECTORY_SEPARATOR, $tokenized_path); } if (file_exists($path_to_class.'.php')) { require_once $path_to_class.'.php'; } return false; } Another reason to do this is to reduce the number of PHP files scattered among directories. Usually you check file existence before you require a file, to fail gracefully: file_exists($path_to_class.'.php'); If you take a look at the actual Doctrine\ORM\Query\Expr code, you'll see they use all of the "inner classes", so you actually do: file_exists("/path/to/Doctrine/ORM/Query/Expr.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/AndX.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Base.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Comparison.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Composite.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/From.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Func.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/GroupBy.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Join.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Literal.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Math.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/OrderBy.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Orx.php"); file_exists("/path/to/Doctrine/ORM/Query/Expr/Select.php"); in your autoloader, which causes quite a few I/O reads. Isn't that too much to check on each user's hit? I'm just putting this up for discussion. I want to hear from other PHP programmers what they think of it. And, of course, if you have a silver bullet addressing the problems I've described here, please share.
I have also been wondering whether my vague question fits here; according to the FAQ it seems this question addresses a "software architecture" problem/proposal. I'm sorry if my scribble seems a bit clunky :) Thanks.
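The load_class() function above leans on an undefined find_meta_class() helper. One possible sketch of it follows; its return convention (the number of leading tokens to keep, or FALSE when there is no "meta-namespace" part) is an assumption chosen so that it composes with the array_slice() call above:

    function find_meta_class(array $tokens)
    {
        $last = count($tokens) - 1;
        // Skip the vendor name (index 0): it is upper-cased by convention
        // but is never the "meta-namespace" file we are looking for.
        for ($i = 1; $i < $last; $i++) {
            if ($tokens[$i] !== '' && ctype_upper($tokens[$i][0])) {
                return $i + 1; // keep the path up to and including this token
            }
        }
        return false; // no meta-namespace part: the class maps to its own file
    }

With array('Doctrine', 'orm', 'query', 'Expr', 'Func') this returns 4, so the autoloader includes Doctrine/orm/query/Expr.php and relies on that file to define Expr\Func.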

    Read the article

  • 12.10 update breaks NFS mount

    - by TarekEldeeb
    I've just upgraded to the latest 12.10 beta. Rebooted twice. The problem is with the NFS folders not mounting, here's a verbose log. # mount -v myserver:/nfs_shared/tools /tools/ mount: no type was given - I'll assume nfs because of the colon mount.nfs: timeout set for Mon Oct 1 11:42:28 2012 mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 
'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: Connection timed out My IP is .109 and the server is .22. Just before this upgrade I was able to mount normally, and other PCs on my network can still mount the share, so it does not look like a server-side issue. How can I analyse the problem? Are there specific log files I should check? Any help?
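Not a confirmed fix, but since every failure in the log is a portmap query timing out, a few standard client-side checks may help narrow it down (the addresses and mount point below simply mirror the log):

    # Can the client reach the server's portmapper/rpcbind at all?
    rpcinfo -p 192.168.0.22

    # Does the server export the share to this client?
    showmount -e 192.168.0.22

    # Force NFSv3 explicitly to see whether only the v4 path is broken
    sudo mount -v -t nfs -o vers=3 myserver:/nfs_shared/tools /tools/

    # Client-side NFS messages usually land in the system log
    grep -i nfs /var/log/syslog | tail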

    Read the article

  • Powershell SQL query--connection string

    - by sean
I am trying to query several different SQL servers and run a command on each of them, but I am unable to get the connection string right; code below. I receive the following error: Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. I thought that if I passed in the credentials it wouldn't care about the domain. How do I get around this? Thanks in advance. $serverList = @(Get-Content "c:\AllServers.txt") $query = "SELECT COUNT(thing) AS [RowCount] FROM My_table" $Database = "My_DB" # Read a file foreach ( $svr in $serverList ) { $conn=new-object System.Data.SqlClient.SQLConnection $ConnectionString = "Server={0};Database={1};User ID=sa;Password=Password;Integrated Security=True" -f $svr, $Database $conn.ConnectionString=$ConnectionString $conn.Open() $cmd=new-object system.Data.SqlClient.SqlCommand($Query,$conn) $conn.Close() }
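A hedged sketch of the loop, assuming SQL authentication is what's intended: Integrated Security=True tells the provider to use Windows authentication and ignore the User ID/Password pair, which is why the error mentions the Windows domain. Dropping it (or setting it to False) lets the SQL login take effect. The ExecuteScalar() call is added here only to show the query actually running; it is not in the original code.

    foreach ($svr in $serverList) {
        $conn = New-Object System.Data.SqlClient.SqlConnection
        # SQL authentication only: no Integrated Security, so the sa credentials are used
        $conn.ConnectionString = "Server={0};Database={1};User ID=sa;Password=Password" -f $svr, $Database
        $conn.Open()
        $cmd = New-Object System.Data.SqlClient.SqlCommand($query, $conn)
        $rowCount = $cmd.ExecuteScalar()   # returns the single COUNT(...) value
        Write-Output "$svr : $rowCount"
        $conn.Close()
    }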

    Read the article

< Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >