Search Results

Search found 11073 results on 443 pages for 'higher kinded types'.


  • How Can I Improve This Card-Game AI?

    - by James Burgess
    Let me get this out there before anything else: this is a learning exercise for me. I am not a game developer by trade or hobby (at least, not seriously) and am purely delving into some AI- and 3D-related topics to broaden my horizons a bit. As part of the learning experience, I thought I'd have a go at developing a basic card game AI. I selected Pit as the card game I was going to attempt to emulate (specifically, the 'bull and bear' variation of the game as mentioned in the link above). Unfortunately, the rule-set that I'm used to playing with (an older version of the game) isn't described. The basics of it are:

    - The number of commodities played with is equal to the number of players.
    - The bull and bear cards are included.
    - All but two players receive 8 cards; two receive 9 cards.
    - A player can win the round with 7 + bull, 8, or 8 + bull (receiving double points). The bear is a penalty card.
    - You can trade up to a maximum of 4 cards at a time. They must all be of the same type, but can optionally include the bull or bear (so you could trade A, A, A, Bull - but not A, B, A, Bull).

    For those who have played the card game, it will probably have been as obvious to you as it was to me that, given the nature of the game, gameplay would seem to resemble a greedy algorithm. With this in mind, I thought it might simplify my AI experience somewhat. So, here's what I've come up with for a basic AI player to play Pit... and I'd really just like any form of suggestion (from improvements to reading materials) relating to it. Here it is in something vaguely pseudo-code-ish ;)

    While AI does not hold 7 similar + bull, 8 similar, or 8 similar + bull, do:
    1. Establish 'target' hand, by seeing which card the AI holds the most of.
    2. Prepare to offer the next-most-numerous card type in a trade (max. held, or 4, whichever is fewer).
    3. If holding the bear, add it to the offered cards (if trading <= 3 cards) or swap it in (if trading 4 cards).
    4. Offer cards for trade.
    5. If cards are accepted for trade within X turns, continue (clearing 'failed card types'). Otherwise:
       a. If only one card remains in the trade, go to #6. Otherwise:
          i. Remove one non-penalty card from the trade.
          ii. Return to #5.
    6. Add the card type to a temporary list of failed card types.
    7. Repeat from #2 (excluding 'failed card types').

    I'm aware this is likely to be a sub-optimal way of solving the problem, but that's why I'm posting this question. Are there any AI- or algorithm-related concepts that I've missed and should be incorporating to make a better AI? Additionally, what are the flaws with my AI at present (I'm well aware it's probably far from complete)? Thanks in advance!
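
    Purely to make the pseudo-code above concrete, here is a rough Python sketch of the win test and steps 1-3 of the loop; the card representation and the idea of an 'offer' as a list of card strings are invented for illustration, not part of any real engine:

        from collections import Counter

        BULL, BEAR = "BULL", "BEAR"

        def has_winning_hand(hand):
            # 7 similar + bull, 8 similar, or 8 similar + bull (double points)
            counts = Counter(card for card in hand if card not in (BULL, BEAR))
            best = max(counts.values(), default=0)
            return best >= 8 or (best == 7 and BULL in hand)

        def next_offer(hand, failed_types):
            # Step 1: the 'target' type is the one we hold the most of
            counts = Counter(card for card in hand if card not in (BULL, BEAR))
            ranked = [t for t, _ in counts.most_common()]
            if not ranked:
                return None
            # Step 2: offer the next-most-numerous type not already tried
            candidates = [t for t in ranked[1:] if t not in failed_types]
            if not candidates:
                return None                      # nothing left to offer this round
            trade_type = candidates[0]
            offer = [trade_type] * min(counts[trade_type], 4)
            # Step 3: sneak the bear (penalty card) into the offer
            if BEAR in hand:
                if len(offer) == 4:
                    offer[-1] = BEAR
                else:
                    offer.append(BEAR)
            return offer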

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at Jazoon 2011 in Zurich (Switzerland). It was a really cool event and it had many top-notch speakers, not only from the Microsoft universe. One of the most interesting talks was from Don Syme, with the title: F# Today/F# Tomorrow. He showed how to use F# scripting to browse through open databases, OData web services, SharePoint, … interactively. It looked really easy with the help of F# Type Providers, which is the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like a usual strongly typed object model. No guessing what the property of an object is called: Intellisense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. If we used e.g. a free chemical database, it would be

        let data = DbProvider(http://.....)

    and data would then be the object which contains all the data from the database. It has an elements collection, which contains an element, which has the properties: Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full Intellisense because the returned object is strongly typed, which makes this happen. The same can be achieved, of course, with code generators that take as input the schema of the data source (OData web service, database, SharePoint, JSON serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work, but it has the downside that if the schema of your data source is huge you will quickly run against a wall with traditional code generators, since the generated "deserialization" assembly could easily become several hundred MB.

    *** The following part contains my guesses at how this exactly works, based on asking Don two questions ***

    Q: Can I use Type Providers within C#?
    D: No.
    Q: F# is after all a library. Can I reference the F# assemblies and use the contained Type Providers?
    D: F# annotates the generated types in a special way at runtime, which is not a static type that C# could use.

    The F# Type Providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the url of your input data. The obtained schema information is used by the compiler to generate static types as usual, but only for a small subset (the top-level classes up to a certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time, which could be a problem if you want to keep the actual url in a config file. Ok, so this explains why it works at all. But in the demo we did see full Intellisense support down to the deepest object level. It looks like, as you navigate deeper into the object hierarchy, the Type Provider is instantiated in the background and attaches to a true static type the properties determined at run time while you were typing. So this type is not really static at all. It is static only in the sense that its properties show up in Intellisense. Since this type information is determined while you are typing and is not used to generate a true static type, you cannot use these "intellistatic" types from C#. Nonetheless this is a very cool language feature. With the plotting libraries you can generate expressive charts from any data source within seconds, to quickly get an overview of any structured data storage. My favorite programming language C# will not get such features in the near future, but there is hope.
    If you restrict yourself to OData sources, you can use LINQPad to query any OData-enabled data source with LINQ with ease. There you can query Stackoverflow, for example. The output is also nicely rendered, which makes it a very good tool to explore OData sources today.

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation
    Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, many lock-and-send operations, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

    Fragmentation is likely to occur in the following situations:
    - Read/write databases where users are constantly updating data
    - Databases that execute calculations around the clock
    - Databases that frequently update and recalculate dense members
    - Data loads that are poorly designed
    - Databases that contain a significant number of Dynamic Calc and Store members
    - Databases that use an isolation level of uncommitted access with commit block set to zero

    There are two types of data block fragmentation:
    - Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
    - Block order on disk, which is measured using the Average Cluster Ratio statistic.

    Average Fragmentation Quotient
    The Average Fragmentation Quotient ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either be appended at the end of the file or fitted into another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have; therefore, the bigger the .PAG file and the longer it takes to traverse through the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.

    Average Cluster Ratio
    The Average Cluster Ratio describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the outline. As you load data and calculate data blocks, the sequence can start to fall out of order. This is because when you write to a block it may not be possible to place it back in the exact same spot in the database where it existed before. The lower this number, the more out of order the blocks become and the more it affects performance. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1 (e.g. 0.01032828) means the data blocks are getting further out of order relative to the outline order.

    Eliminating Data Block Fragmentation
    Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure:
    1. Implicit restructures. An implicit dense restructure happens when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase creates new .PAG files, restructuring the data blocks in them. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures.
    2. Explicit restructures. An explicit dense restructure happens when a database restructure is initiated manually. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks.

    Empty Blocks vs. Fragmentation
    The existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts of the database properties. There are no statistics for empty blocks. The only way to determine if empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • Why does Mass Effect 1 run so slow on my machine if I have an XFX NVidia 9400GT video card? [closed]

    - by Papuccino1
    I'm so sick and tired of having my components pass the minimum requirements of a game and then getting 15 FPS with everything on low. Shouldn't PC developers say 'use at least this video card for a smooth 30 FPS'? Here are my specs:

    Windows 7
    2GB DDR2 RAM
    XFX Nvidia 9400GT
    Intel Pentium D dual-core 2.8GHz

    I should be getting at LEAST 30 FPS with everything on low, right? Please tell me what I can do to make games run as they should, or is my video card not good enough for these games? Here are the recommended requirements from the official site:

    Recommended System Requirements for Mass Effect on the PC
    Operating System: Windows XP or Vista
    Processor: 2.6+ GHz Intel or 2.4+ GHz AMD
    Memory: 2 GB RAM
    Video Card: NVIDIA GeForce 7900 GTX or higher. ATI X1800 XL series or higher
    Hard Drive Space: 12 GB
    Sound Card: DirectX 9.0c compatible sound card and drivers - 5.1 sound card recommended

    My video card is a 9400GT; how is that worse than a 7900GTX? :S Edit 2: I should note that I get poor frames when running the game at absolute BOTTOM specs: lowest resolution, no particles, etc. Absolute ZERO, and still getting poor framerates.

    Read the article

  • Laptop Asus P50IJ with Intel 4500M GMA output going to a Dell 1907FP external monitor will not allow

    - by ProfessionalAmateur
    Hello - I just purchased an Asus P50IJ-X2 laptop which has an Intel GMA 4500M video card, running Windows 7. At work I output this laptop to a Dell 1907FP LCD which has a maximum resolution of 1280x1024. No matter what I do, Windows will not allow the laptop to set a resolution higher than 1024x768 on this LCD monitor. I've even gone to the extent of downloading PowerStrip (I'd post a link, but I'm new and can only enter 1 url; if you google for powerstrip it's the first option) to create a custom driver for my monitor, thinking Windows was having a hard time seeing the available resolutions it would accept. However, PowerStrip read the registry and properly sees the monitor and what it's capable of, so I'm now at a complete loss as to why Windows 7 will not allow me to set/use a 1280x1024 resolution for this external monitor (as my last laptop did, running Vista). The Intel documentation (http://software.intel.com/en-us/articles/quick-reference-guide-to-intel-integrated-graphics/) indicates that the GMA 4500M should be able to run up to a 2560x1600 max resolution. The Dell 1907FP specification states it can run up to a 1280x1024 resolution. But no matter what, the computer will not allow me to set higher than 1024x768. I'm completely baffled, but I would really like to be able to output this laptop at a reasonable resolution; 1024x768 makes me feel like I'm using my mom's computer. Any help would be greatly appreciated! Here are some attached images (I apologize for the links; being new I cannot post images) that should help explain this better:

    Image 1 - This image is from PowerStrip, which shows the monitor's max accepted resolution and, at the top right, the max resolution my PC currently allows. (http://imgur.com/agrno.png)
    Image 2 - This shows my Windows 7 resolution picker. (http://imgur.com/3nv6q.png)
    Image 3 - The 'List all modes' option, taken from Screen Resolution > Advanced Settings > List All Modes. (http://imgur.com/AMREh.png)
    Image 4 - Monitor information from the registry read by PowerStrip; this shows the laptop is able to read the necessary info from the LCD monitor. (http://imgur.com/hUX4D.png)

    Read the article

  • How to set up QoS on ADSL router (terracom) for prioritizing browsing

    - by DBZ_A
    I want to configure the ADSL router which connects 10+ machines to the internet. I want to give maximum priority to browsing (ports 80, 443) and set low priority for BitTorrent etc. (port 42180). I have been experimenting with settings, but with no luck. There are three settings which confuse me; here is my current understanding:

    802.1 Priority - Related to the LAN level; possible values 0-7, higher numbers mean higher priority.
    'Mark traffic priority' - clueless about this.
    IPP/DS - IP Precedence - possible values 0-7; 6 & 7 are reserved, so set 5 for highest priority. Or, when using DSCP, set 46 for highest priority.

    Please help me in getting this done... A similar question for another model of router is here, but with fewer confusing options :) How to configure QoS on home router
    Update: from discussion on another thread, QoS can control only upstream traffic (from the router to the internet); while this may in turn affect the downstream rate, there is no direct control over data coming into the router.
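
    If it helps to see where the number 46 actually lives: DSCP occupies the top six bits of the IP TOS byte, so an application marking its own traffic (on Linux) would do something like this small Python sketch; the router's queueing then acts on that mark. The host/port are placeholders:

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # DSCP sits in the upper 6 bits of the TOS byte: 46 << 2 == 184 (0xB8),
        # the "Expedited Forwarding" class usually given highest priority
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)
        sock.connect(("example.com", 443))  # this flow now carries the EF mark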

    Read the article

  • What speed are Wi-Fi management and control frames sent at?

    - by Bryce Thomas
    There are a bunch of different 802.11 Wi-Fi standards, e.g. 802.11a, 802.11b, 802.11g, 802.11n etc., that all support different speeds. Wi-Fi frames are generally categorised as one of the following:

    Data frames - carry the actual application data
    Control frames - coordinate when it's safe to send / reduce collisions
    Management frames - handle connection discovery/setup/teardown (e.g. AP discovery, association, disassociation)

    My question is whether all these frames, and specifically management frames, are transmitted at the fastest supported speed available, or whether certain classes of frames are transmitted at some lowest-common-denominator speed. I have noticed that when I put an 802.11b/g-only device into monitor mode and capture traffic over the air, I still see management frames (e.g. association/disassociation) being transmitted between my phone and AP, which are both 802.11n, even though 802.11n has a higher transfer rate. So I am imagining one of two possibilities:

    1. My 802.11n phone/AP had to negotiate a slower speed for some reason, and that's why I can see their frames on my 802.11b/g monitoring device.
    2. Management frames (and perhaps control frames also?) are sent at a lower speed, and it's only data frames that are transmitted faster with newer 802.11 standards.

    The reason I would like to know which of these two possibilities (or perhaps a third) is the case is that I want to capture management frames, and I need to know whether using an 802.11b/g card is going to lead to me missing some frames sent at higher speeds than the monitoring card can observe. If management frames are indeed sent at a slower rate, then it's all good. If I just happen to be seeing the management frames because my phone/AP have negotiated a slower rate, though, then I need to reconsider what card I use for packet capture.
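
    One way to probe possibility 2 empirically is to sniff in monitor mode and read the rate out of each frame's Radiotap header; here is a rough Python/scapy sketch. The interface name mon0 is an assumption, and Radiotap field availability varies by driver and scapy version:

        from scapy.all import sniff, Dot11, RadioTap

        def show_mgmt(pkt):
            # 802.11 frame type 0 = management (assoc, disassoc, beacons, ...)
            if pkt.haslayer(Dot11) and pkt[Dot11].type == 0:
                # Radiotap reports the data rate in units of 0.5 Mbps, if present
                rate = getattr(pkt[RadioTap], "Rate", None) if pkt.haslayer(RadioTap) else None
                rate_str = f"{rate / 2} Mbps" if rate is not None else "rate unknown"
                print(f"mgmt subtype {pkt[Dot11].subtype}: {rate_str}")

        sniff(iface="mon0", prn=show_mgmt, store=False)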

    Read the article

  • The bottlenecks of any computer, what to look for?

    - by WebDevHobo
    Whether it is a laptop or a desktop, any computer is made up of several pieces of hardware that communicate with each other, sending data back and forth to ensure that the user gets the desired results. I have seen some theoretical stuff on computers & hardware, but I wonder how it all comes together: CPU, RAM, graphics card, L1 cache, L2 cache, L3 cache, FSB... and all other things. Which is the biggest bottleneck? Why would a person not want/need a big value in one of those categories in certain situations? P.S.: when reading the specs of the i5 750 processor, I came across this description: "In place of the FSB, one or more high-speed, point-to-point buses called QuickPath Interconnect (QPI) are used, formerly known as Common Serial Interconnect Bus or CSI. QPI features higher bandwidth than the traditional FSB and is better suited to system scaling." What is this, and how does it compare to the FSB? EDIT: I am not planning to buy a computer at all. The goal of this question is to understand the internal relationships of the various hardware pieces, their specific functions, and how they work together. For instance, I have heard that a somewhat higher-than-usual amount of L2/L3 cache can help speed up your computer. What's up with that? I also forgot to mention hard-disk RPM.
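
    As a tiny illustration of the cache part of this, the Python sketch below walks the same buffer with a cache-friendly stride and a cache-hostile one. CPython's interpreter overhead blunts the effect, so treat the numbers as indicative only; in a compiled language the gap is far more dramatic:

        import array
        import time

        N = 1 << 24                          # 16M 4-byte ints = 64 MB, larger than any L3 cache
        data = array.array("i", [0]) * N

        def touch(stride, touches=1 << 21):
            idx, total = 0, 0
            t0 = time.perf_counter()
            for _ in range(touches):
                total += data[idx]
                idx = (idx + stride) % N     # wrap around the buffer
            return time.perf_counter() - t0

        t_seq = touch(1)       # consecutive elements: cache- and prefetcher-friendly
        t_jump = touch(4097)   # ~16 KB jumps: mostly cache and TLB misses
        print(f"stride 1: {t_seq:.2f}s   stride 4097: {t_jump:.2f}s")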

    Read the article

  • Tools for displaying a multidimensional data table?

    - by ShreevatsaR
    [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.] There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of in a cube instead.) Is there a way to display this "cube" interactively, ideally on a webpage? Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants. The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't need sophisticated graphics necessarily, just the ability to select from cross-sections of variables. But I have no experience with (say, for displaying on a webpage) what web gadgets exist, so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't throw up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded on a webpage, but I cannot tell if this is one of them.) [I can imagine how it ought to work for higher dimensions. For four-dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.] So has this been written? Is this easy to write? Where ought one to look for such a thing?
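
    For what it's worth, the slicing itself is trivial once the data is in an array library; this rough numpy sketch builds the "stack" of cross-section tables for whichever axis is left over (the shapes are arbitrary examples, and the display layer is still up to you):

        import numpy as np

        cube = np.random.rand(4, 5, 6)           # the (i, j, k) values

        def stack_of_tables(cube, fixed_axis):
            """One 2-D table per value of the axis the user did NOT select."""
            return [np.take(cube, idx, axis=fixed_axis)
                    for idx in range(cube.shape[fixed_axis])]

        tables = stack_of_tables(cube, fixed_axis=2)  # user picked axes i and j
        print(len(tables), tables[0].shape)           # 6 tables of shape (4, 5)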

    Read the article

  • What are the frequencies of current in computers' external peripheral cables and internal buses?

    - by Tim
    From Wikipedia, three different cases of current frequency are discussed, along with the types of cables that are suitable for them: Ordinary electrical cables suffice to carry low-frequency AC, such as mains power, which reverses direction 100 to 120 times per second (cycling 50 to 60 times per second). However, they cannot be used to carry currents in the radio frequency range or higher, which reverse direction millions to billions of times per second, because the energy tends to radiate off the cable as radio waves, causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the power from reaching the destination. Transmission lines use specialized construction, such as precise conductor dimensions and spacing, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. Types of transmission line include ladder line, coaxial cable, dielectric slabs, stripline, optical fiber, and waveguides. The higher the frequency, the shorter the waves in a transmission medium. Transmission lines must be used when the frequency is high enough that the wavelength of the waves begins to approach the length of the cable used. To conduct energy at frequencies above the radio range, such as millimeter waves, infrared, and light, the waves become much smaller than the dimensions of the structures used to guide them, so transmission line techniques become inadequate and the methods of optics are used. I wonder what the frequencies are of the currents in computers' external peripheral cables, such as Ethernet and USB cables, and in computers' internal buses? Are the cables also made specially for those frequencies? Thanks!
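
    The "wavelength approaches cable length" rule from the quote is easy to put numbers on. A small sketch, where the signalling frequencies and the 0.66 velocity factor are representative assumptions rather than values from any particular spec:

        C = 3.0e8  # speed of light in vacuum, m/s

        def wavelength_m(freq_hz, velocity_factor=0.66):
            # wavelength inside the cable = (c * velocity factor) / frequency
            return C * velocity_factor / freq_hz

        signals = [
            ("50 Hz mains", 50.0),
            ("~125 MHz Gigabit Ethernet symbol rate", 125e6),
            ("~240 MHz USB 2.0 fundamental (480 Mb/s)", 240e6),
        ]
        for name, f in signals:
            print(f"{name}: wavelength in cable = {wavelength_m(f):,.1f} m")
        # mains: thousands of km, so any building cable is electrically 'short';
        # GigE: ~1.6 m and USB 2.0: ~0.8 m, which is why metre-scale data cables
        # are engineered as impedance-controlled transmission lines.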

    Read the article

  • Erratic WiFi 2.4 GHz channel spikes, what gives?

    - by Francis W. Usher
    Sorry guys, first a gripe about my neighbor's WiFi access point (it is related): they totally hog the center nine 2.4 GHz channels (3-11), centered right at 7! I know the outer regions of the signal don't make as much of a difference, and technically they're running channels 5 & 9. Anyway, their signal is clearly interfering with mine, which is necessarily centered at 3 or 11 to evade their interference. I guess it's somewhat a case of access point envy: they happen to have both a stronger signal and a higher data rate, while occupying twice the bandwidth that I do. Getting to the point, I've noticed that they tend to sit nice and pretty centered at 7, but they definitely auto-select their channel, and I've noticed that the auto-selection algorithm tends to shift towards the higher channels; hence I decided to pick channel 3, and I don't get so many intermittent lag spikes any more. Anyway, the thing that weirded me out was the reason they have to auto-select sometimes: unexplained, powerful (talking on the order of 0 dB here) giant spikes of 2.4 GHz activity in consistent regions of the spectrum. I don't think it's just noise, since my wireless monitoring software is registering a MAC address, a manufacturer, and usually a fairly coherent ASCII name... and it seems to be a fairly well-confined signal. But these signals are fairly common, and they do some weird stuff to my signal. So my questions are: what are these signals? Where are they coming from? Where are they going? Why are they so ridiculously strong? Why don't they ever last very long? Here's an inSSIDer screenshot I took, for your perusal. I am labeled "me", my greedy neighbor is labeled "neighbor", and the 2 quasar signals are labeled "WTF?".

    Read the article

  • Linux kernel - can't access sda16 & sda17

    - by osgx
    I can't access sda16, sda17 and higher partitions from my Linux. This Linux is Debian (very old); kernel 2.6.23. So, I know that such an old Linux kernel can't access 16 partitions on a single SATA disk. What version of the kernel should I use to be able to access sda16, sda17 etc.? I want to update only the kernel, not the whole Linux distribution. PS. There is a Windows NT kernel which can access and format the 16th, 17th or higher partition, but my intention is to use sda16 and sda17 from Linux (I want the Linux kernel). PPS: dmesg:

        sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 sda8 sda9 sda10 sda11 sda12 sda13 sda14 sda15 >
        sd 2:0:0:0: [sda] Attached SCSI disk
        sd 4:0:0:0: [sdb] xxx 512-byte hardware sectors ...

    So, there is no mapping of sda16, sda17, ... to sdb. sdb is the second physical hard drive.

    Read the article

  • Simple electric DC question. Current consumption

    - by Bobb
    Suppose you have a DC power supply and a consumer connected to it (e.g. a computer PSU and a hard drive). Suppose the PSU which was supplied with the consumer has an output of 5V 1A. So I assume that the consumer should not consume more than 1A. Suppose the original PSU is broken now and I want to replace it with one I have which is 5V 10A. My guess is that current is something which depends on the consumer. So if the consumer normally consumes 1A, then it will not consume more than that even if it is connected to a 10A PSU. In other words - am I right in assuming that the consumer will not burn out when connected to a power supply with a higher current output? P.S. My understanding is that voltage is something independent of the consumer. If you give it higher voltage it will burn (voltage is imposed by the PSU on the consumer). However, current must be the opposite - the consumer draws as much current as it needs, not as much as the PSU can provide (given, of course, that the max PSU current is greater than what the consumer needs).
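
    Your guess is right, and Ohm's law is the one-line justification: the current is set by the load, not by the supply's rating. A toy calculation in Python (the 5-ohm load is an assumed figure, just for illustration):

        voltage = 5.0           # volts, fixed by the PSU
        load_resistance = 5.0   # ohms - a property of the consumer (assumed value)

        current = voltage / load_resistance   # Ohm's law: I = V / R
        print(f"The load draws {current:.1f} A")  # -> 1.0 A

        # A 1 A-rated and a 10 A-rated 5 V supply both see the same 1.0 A draw;
        # the ampere rating is only the ceiling the PSU can deliver, not a push.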

    Read the article

  • Faster, secure protocol/code required for long-distance transfer.

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre, but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machines - but the transfer NEEDS to be secure. Using either iPerf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts: Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic so long as it'll work between Windows and Linux. Should I be using hardware to do the encryption, either in the client or the network parts? If so, what would you recommend? I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load - moving to Linux at the client end may be a better idea, but it would be quite disruptive. Am I missing a trick? Thanks in advance.
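
    One more thing worth ruling out besides encryption overhead: a single TCP stream over 250 miles is also limited by window size. A back-of-envelope Python sketch, where the ~8 ms RTT is an assumed figure for that distance and equipment, not a measurement:

        link_bps = 1e9      # 1 Gb/s link
        rtt = 0.008         # ~8 ms round trip - assumed for ~250 miles plus hops

        bdp = link_bps * rtt / 8          # bytes in flight needed to fill the pipe
        print(f"bandwidth-delay product: {bdp / 1024:.0f} KiB")       # ~977 KiB

        window = 64 * 1024  # a classic default receive window of that Windows era
        print(f"single-stream cap at 64 KiB: {window / rtt / 1e6:.1f} MB/s")  # ~8 MB/s

    If the effective window is stuck near 64 KiB, ~8 MB/s is the ceiling before any crypto cost, which is in the same neighbourhood as the numbers you are seeing.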

    Read the article

  • Rendering a frame is producing noise from speakers in Windows and Linux

    - by Robber
    When any hardware-accelerated application renders a frame (or many of them), a very short noise comes from my speakers. This can be a game, a WebGL application or XBMC. When the application/game is rendering many frames per second (like most of them do), the noise is a continuous buzzing that gets higher-pitched with higher framerates. This applies to both Linux and Windows, so I'd assume it's a hardware problem. The current hardware in the PC is:

    CPU: Core2Quad Q9550
    GPU: Radeon HD 5770
    RAM: 2x2GB DDR2
    Motherboard: Asus P5QLD PRO
    PSU: be quiet! Pure Power 530W
    Screen and speakers: old 720p LCD TV connected via VGA and an aux cable

    Muting the TV stops the noise; muting Windows doesn't. I tried replacing the PSU first (I used a Tagan 700W PSU before) because I thought it was a power problem. It wasn't. I tried replacing the motherboard (I used an ASUS P5B SE before) next because I thought it was a sound card problem. It wasn't. I tried the GPU in a different PC because I thought it was a broken graphics card. It worked perfectly fine in the other PC. I thought it might be interference, but moving the audio cable around changes absolutely nothing. I tried using an HDMI cable instead and that did work, but it is not an option since my TV has only one HDMI input and I need that for my PS3.

    Read the article

  • LAMP Stack Version Help -- Is there a website or version tracker source to help suggest the right versions of each part of a platform stack?

    - by Chris Adragna
    Taken singly, it's easy to research versions and compatibility. Version information is readily available on each single part of a platform stack, such as MySQL. You can find out the latest version, the stable version, and sometimes even the percentage of people adopting it by version (personally, I like seeing numbers on adoption rates). However, when trying to find the best possible mix of versions, I have a harder time. For example, "if you're using MySQL 5.5, you'll need PHP version XX or higher." It gets even more difficult to mitigate when you throw higher-level platforms into the mix, such as Drupal, Joomla, etc. I do consider "wizard"-like installers to be beneficial, such as the Bitnami installers. However, I always wonder if those solutions cater more to the least common denominator - be-all to many - and as such, I think I'd be better off installing things on my own. Such solutions do seem kind of slow to adopt new versions - slower than necessary, I suspect. Is there a website or tool that consolidates versioning data in order to help a webmaster choose which versions to deploy or which upgrades to install, in consideration of all the other parts of the stack?

    Read the article

  • reaching constant video bitrate by using ffmpeg

    - by sebastian
    We use ffmpeg and a transcoding script for transcoding, and want to make some batch files which we can use for transcoding. For example, I use a parameter called video_kbit: if I pass in 30000 it should reach 30 Mbit/s. Of course, if I use 6000 as the parameter it should reach 6 Mbit/s as well, so I have one script which reaches every video bitrate I want. As my settings are now, I only reach 18.1 Mbit/s. Only when I use 15000 as the parameter do I reach my goal of a constant video bitrate of 15 Mbit/s. If I use 8000 as the parameter I get 10.1 Mbit/s as a result. So below 15000 I get a higher bitrate, and above 15000 a lower bitrate, than I want. My presettings are:

        ffmpeg -threads "4" -i "$2" -f mp4 -c:v libx264 -crf 1 \
          -bufsize 30000k -maxrate ${FC_PARAM_video_kbit}k \
          -acodec libfaac -ac 2 -ab ${FC_PARAM_audio_kbit}k -ar 44100 \
          -pix_fmt yuv420p -vf scale=${FC_PARAM_width}:${FC_PARAM_height} -y "$3"

    And I am using these parameters:

        FC_PARAM_video_kbit = 30000
        FC_PARAM_audio_kbit = 192
        FC_PARAM_width = 1920
        FC_PARAM_height = 1080

    I have tried using a higher bufsize and using profile:v and level settings, but nothing got me near a constant video bitrate of 30 Mbit/s. Do you guys have any ideas or suggestions for a better way to reach my goal?
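
    A side note on the flags, hedged since the rest of the script isn't shown: -crf 1 selects x264's constant-quality mode, and -maxrate then only caps the peaks, which would explain the bitrate drifting with content. For a bitrate target, x264 is normally driven with -b:v, optionally pinning -minrate/-maxrate to the same value for near-constant output. A sketch of that variant, wrapped in Python purely for readability (flag spellings follow stock ffmpeg, but check them against your build):

        import subprocess

        def transcode(src, dst, video_kbit, audio_kbit, width, height):
            """Target an average video bitrate instead of constant quality (CRF)."""
            rate = f"{video_kbit}k"
            subprocess.run([
                "ffmpeg", "-threads", "4", "-i", src, "-f", "mp4",
                "-c:v", "libx264",
                "-b:v", rate,                        # average bitrate target
                "-minrate", rate, "-maxrate", rate,  # pin min/max for near-CBR
                "-bufsize", f"{2 * video_kbit}k",
                "-acodec", "libfaac", "-ac", "2", "-ab", f"{audio_kbit}k", "-ar", "44100",
                "-pix_fmt", "yuv420p", "-vf", f"scale={width}:{height}",
                "-y", dst,
            ], check=True)

        transcode("in.mov", "out.mp4", 30000, 192, 1920, 1080)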

    Read the article

  • Low 'Burst Rate' from SATA drive in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which begs some questions... e.g. if this is the highest, then how did the benchmarking tool record the 103MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks. UPDATE: According to this page on the HDTune website: "An important parameter of the test is the Burst Rate. This value should always be higher than the maximum transfer rate. A lower value is usually an indication of a configuration problem." So what might be the configuration problem?

    Read the article

  • Minecraft server hosting hardware specifications [on hold]

    - by Andrew Wright
    I am planning on purchasing a server to rent out Minecraft game servers, largely to friends. I am planning on purchasing a 128GB RAM server to save on colocation costs (as I am likely to need more than 32GB and would otherwise have to rent 2U of space...). I am hoping for some advice about the processing power needed to deal with this amount of RAM. The servers will be run in a shared environment on Linux, in a VM to make backups easier. The server I have in mind is dual-CPU. I have been considering, at the low end, dual Xeon E5-2609V2 quad-core 2.5GHz, and at the high end, dual Xeon E5-2650V2 eight-core 2.6GHz. The difference between these is 6.4 GT/s vs 8 GT/s, and £3000 for the lower-spec server vs £4300 for the higher spec. I was hoping I could get advice about whether it is worth paying for the extra cores and higher speed, or whether I would be wasting my money? Thank you for any help - I appreciate that this is not directly related to professional system administration.

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION
    If you are a SharePoint developer, you know that there are two basic ways to develop against SharePoint: 1) the object model, 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model; in fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage of getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc... all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and how to interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations, and some of the issues and frustrations, of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice. My evaluation of them was made by comparing them against creating my own WCF services to do what I needed. The current project I'm working on involves several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I decided to start up Visual Studio and build a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this DLL to the application team; they would simply call the methods exposed by it and voila! it would do some task or operation in SharePoint.
    SOLUTION
    My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same: it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers - ultimately, different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful, because I wrote one data layer completely implementing SharePoint's built-in web services and another implementing my own WCF service that I wrote. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project, I created a class for every web service call. So, for example, if I used Lists.asmx, I created a class called "DocumentRetrieval"; this class would do the grunt work of connecting to the correct URL, performing the basic operation of contacting the service, and so on. If I used Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would now interact with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This, again, is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
    Let's take a look at the "Create Project" task or operation. The integration point is described as: "the dll is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project, and this is the class that contacts it:

        class DocumentRetrieval
        {
            private ListsSoapClient _service;
            private bool _impersonation;

            public DocumentRetrieval(bool impersonation, string endpt)
            {
                _service = new ListsSoapClient();
                this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
                _impersonation = impersonation;
                if (_impersonation)
                {
                    _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                    _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                    _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                        System.Security.Principal.TokenImpersonationLevel.Impersonation;
                }
            }

            private void SetEndPoint(string p)
            {
                _service.Endpoint.Address = new EndpointAddress(p);
            }

            /// <summary>
            /// Creates a document library with a specific name and template ID
            /// </summary>
            /// <param name="listName">New list name</param>
            /// <param name="templateID">Template ID</param>
            /// <returns></returns>
            public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
            {
                XmlDocument sample = new XmlDocument();
                XmlElement viewCol = sample.CreateElement("Empty");
                try
                {
                    _service.Open();
                    viewCol = _service.AddList(listName, "", templateID);
                }
                catch (Exception ex)
                {
                    exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
                }
                finally
                {
                    _service.Close();
                }
                return viewCol;
            }
        }

    There was a lot more in this class (that I am not including), because I was reusing the grunt work and making other operations with Lists.asmx - for example, updating content types, or changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project), I wanted to expose an IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well, there is no service call or method available to do that check.
    So this is what I wrote:

        public bool DocLibExists(string listName, ref ExceptionContract exContract)
        {
            try
            {
                var allLists = _service.GetListCollection();
                return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            return false;
        }

    This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise to see if the document library already existed. This took a little bit of getting used to: now, instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through it and find whether the attribute "Title" existed and had the value of the list name; if so it would return true, if not false. I didn't particularly like working this way - dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating that the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document libraries, image libraries, custom lists, project tasks, etc… they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project" (or document library) should have. They specified that they had three types of projects. Each project would have unique views - about 10 views for each project. Each project also specified unique configurations (auditing, versioning, content types, etc…). So what had seemed a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too: using the web service call, you could configure views, versioning, even content types, etc… the only catch is, you have to work quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy, and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML syntax error, or even a CAML error at all; I was not sure if it was a security, performance or code-based issue. It was quite tough to work with. At times it was difficult because of the way SharePoint handles metadata: there are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand at times which one to use. So it took a lot of trial and error. There are tools that can help with CAML generation. There is also now Intellisense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
    So my plan was to create a document library, configure it accordingly, and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc… It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have two web parts on it. Well, it turns out that the Template Builder did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, and the content types, but not the custom web parts. I still decided to try this, even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int, but it only has access to the built-in library int definitions. Any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way. I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up, and then create lists based on that template. It can all be done by end-user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None. After some thinking, I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID is required, and also on templates saved as STP files. Now, I don't want to bother with posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template:

        public ServiceResult CreateProject(string name, string templateName, string projectId)
        {
            string siteurl = SPContext.Current.Site.Url;
            Guid webguid = SPContext.Current.Web.ID;
            using (SPSite site = new SPSite(siteurl))
            {
                using (SPWeb rootweb = site.RootWeb)
                {
                    SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                    ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
                } // SPWeb
            } // SPSite
            return _globalResult;
        }

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
            }
        }

        private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
        {
            using (SPSite sitecollection = new SPSite(siteurl))
            {
                using (SPWeb web = sitecollection.AllWebs[webguid])
                {
                    action(web);
                }
            }
        }

    This is actually some of the code I implemented for the service; there was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action method to process the web. This allowed me to properly dispose of the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me. This allowed me to do a lot more than just create a document library with a template; it now gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application; we want SharePoint to do the same." This is something I have been doing for a little while now, but I do hope that SharePoint 2010 will have more of an answer to this and address it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view. I took the opportunity to configure them with the appropriate settings. There were two settings that needed to be set on these web parts; one of them was a Project ID configured in the web part properties.
The following code enhances and replaces the "Act_CreateProject " method above:  private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps) {                         var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));             if (temp != null)             {                 SPLimitedWebPartManager wpmgr = null;                               try                 {                                         Guid listGuid = targetsite.Lists.Add(name, "", temp);                     SPList newList = targetsite.Lists[listGuid];                     SPFolder rootFolder = newList.RootFolder;                     rootFolder.Properties.Add(KEY, projectId);                     rootFolder.Update();                     if (rootFolder.ParentWeb != targetsite)                         rootFolder.ParentWeb.Dispose();                     if (!templateName.Contains("Natural"))                     {                         SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));                         SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);                         wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);                         ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);                                              alldocfile.Update();                     }                                        if (newList.ParentWeb != targetsite)                         newList.ParentWeb.Dispose();                     _globalResult = new ServiceResult(true, "Success", "Success");                 }                 catch (Exception ex)                 {                     _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());                 }                 finally                 {                     if (wpmgr != null)                     {                         wpmgr.Web.Dispose();                         wpmgr.Dispose();                     }                 }             }                         }       private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)         {             var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));             if (wp != null)             {                           (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;                 mgr.SaveChanges(wp);             }         }   This Shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library, Unfortunately, the SPList does not have a Property bag that I can add a key\value pair to. It has to be done on the root folder. Now everything in the integration will reference projects by ID's and will not care about names. My, "DocLibExists" will now need to be changed because a web service is not set up to look at property bags.  I had to write another method on the Service to do the equivalent but with ID's instead of names.  The second thing you will notice about the code is the use of the Webpartmanager. I have seen several examples online, and also read a lot about memory leaks, The above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose it like I did. 
CONCLUSION This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that the out-of-the-box web services will do the trick if you can suffer through CAML and you only need simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I wrote it on a bus during a 12-hour trip to Iowa, half asleep and half awake, so hopefully it makes enough sense to someone.
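To make the CAML remark concrete, here is the kind of query XML that even a trivial item lookup through the out-of-the-box Lists.asmx service requires (a sketch only; the field and value names are illustrative, and System.Xml is assumed):

// Illustrative only: hand-built CAML for a Lists.asmx GetListItems call.
XmlDocument doc = new XmlDocument();
XmlElement camlQuery = doc.CreateElement("Query");
camlQuery.InnerXml =
    "<Where><Eq>" +
    "<FieldRef Name='Title' />" +
    "<Value Type='Text'>ProjectCharter</Value>" +
    "</Eq></Where>";
// The proxy call would then take camlQuery plus similar hand-built
// viewFields and queryOptions nodes - verbose compared to the WCF
// service's strongly typed operations.
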

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, all sharing the same two-column structure.  Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows:

    USE master;
    GO
    IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest;
    GO
    CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
    GO
    ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
    GO
    ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
    GO
    ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
    ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
    ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
    ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
    ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
    ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
    ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
    ALTER DATABASE SortTest SET MULTI_USER;
    ALTER DATABASE SortTest SET RECOVERY SIMPLE;
    USE SortTest;
    GO
    CREATE TABLE dbo.TestCHAR
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding CHAR(3999) NOT NULL,
        CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id)
    );
    CREATE TABLE dbo.TestMAX
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding VARCHAR(MAX) NOT NULL,
        CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id)
    );
    CREATE TABLE dbo.TestTEXT
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding TEXT NOT NULL,
        CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id)
    );
    -- =============
    -- Load TestCHAR (about 3s)
    -- =============
    INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
    SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
    FROM
    (
        SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
        FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
        ORDER BY n ASC
    ) AS Data
    ORDER BY Data.n ASC;
    -- ============
    -- Load TestMAX (about 3s)
    -- ============
    INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
    SELECT CONVERT(VARCHAR(MAX), padding)
    FROM dbo.TestCHAR
    ORDER BY id;
    -- =============
    -- Load TestTEXT (about 5s)
    -- =============
    INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
    SELECT CONVERT(TEXT, padding)
    FROM dbo.TestCHAR
    ORDER BY id;
    -- ==========
    -- Space used
    -- ==========
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';
    CHECKPOINT;

    That takes around 15 seconds to run, and the sp_spaceused calls at the end report the space allocated to each table. To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
    The basic shape of the test query is the same for each of the three test tables:

    SELECT TOP (150) T.id, T.padding
    FROM dbo.Test AS T
    ORDER BY NEWID()
    OPTION (MAXDOP 1);

    Test 1 – CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution.

    DECLARE @read BIGINT, @write BIGINT;
    SELECT @read = SUM(num_of_bytes_read),
           @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';
    SET STATISTICS IO ON;
    SELECT TOP (150) TC.id, TC.padding
    FROM dbo.TestCHAR AS TC
    ORDER BY NEWID()
    OPTION (MAXDOP 1);
    SET STATISTICS IO OFF;
    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
           tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
           internal_use_MB =
           (
               SELECT internal_objects_alloc_page_count / 128.0
               FROM sys.dm_db_task_space_usage
               WHERE session_id = @@SPID
           )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    Let’s take a closer look at the statistics and query plan generated from this run. Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post.

    CHAR(3999) Performance Summary:
    - 5 seconds elapsed time
    - 243MB memory grant
    - 195MB tempdb usage
    - 192MB estimated sort set
    - 25,043 logical reads
    - Sort Warning

    Test 2 – VARCHAR(MAX)

    We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

    DECLARE @read BIGINT, @write BIGINT;
    SELECT @read = SUM(num_of_bytes_read),
           @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';
    SET STATISTICS IO ON;
    SELECT TOP (150) TM.id, TM.padding
    FROM dbo.TestMAX AS TM
    ORDER BY NEWID()
    OPTION (MAXDOP 1);
    SET STATISTICS IO OFF;
    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
           tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
           internal_use_MB =
           (
               SELECT internal_objects_alloc_page_count / 128.0
               FROM sys.dm_db_task_space_usage
               WHERE session_id = @@SPID
           )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

    MAX Performance Summary:
    - 8 seconds elapsed time
    - 245MB memory grant
    - 391MB tempdb usage
    - 193MB estimated sort set
    - 25,043 logical reads
    - Sort warning

    Test 3 – TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

    DECLARE @read BIGINT, @write BIGINT;
    SELECT @read = SUM(num_of_bytes_read),
           @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';
    SET STATISTICS IO ON;
    SELECT TOP (150) TT.id, TT.padding
    FROM dbo.TestTEXT AS TT
    ORDER BY NEWID()
    OPTION (MAXDOP 1, RECOMPILE);
    SET STATISTICS IO OFF;
    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
           tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
           internal_use_MB =
           (
               SELECT internal_objects_alloc_page_count / 128.0
               FROM sys.dm_db_task_space_usage
               WHERE session_id = @@SPID
           )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why:

    TEXT Performance Summary:
    - 0.5 seconds elapsed time
    - 9MB memory grant
    - 5MB tempdb usage
    - 5MB estimated sort set
    - 207 logical reads
    - 596 LOB logical reads
    - Sort warning

    SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.
    This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

    CREATE TABLE dbo.TestMAXOOR
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding VARCHAR(MAX) NOT NULL,
        CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id)
    );
    EXECUTE sys.sp_tableoption
        @TableNamePattern = N'dbo.TestMAXOOR',
        @OptionName = 'large value types out of row',
        @OptionValue = 'true';
    SELECT large_value_types_out_of_row
    FROM sys.tables
    WHERE [schema_id] = SCHEMA_ID(N'dbo')
    AND name = N'TestMAXOOR';
    INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
    SELECT SPACE(0)
    FROM dbo.TestCHAR
    ORDER BY id;
    UPDATE TM WITH (TABLOCK)
    SET padding.WRITE (TC.padding, NULL, NULL)
    FROM dbo.TestMAXOOR AS TM
    JOIN dbo.TestCHAR AS TC
        ON TC.id = TM.id;
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';
    CHECKPOINT;

    Test 4 – MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

    DECLARE @read BIGINT, @write BIGINT;
    SELECT @read = SUM(num_of_bytes_read),
           @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';
    SET STATISTICS IO ON;
    SELECT TOP (150) MO.id, MO.padding
    FROM dbo.TestMAXOOR AS MO
    ORDER BY NEWID()
    OPTION (MAXDOP 1, RECOMPILE);
    SET STATISTICS IO OFF;
    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
           tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
           internal_use_MB =
           (
               SELECT internal_objects_alloc_page_count / 128.0
               FROM sys.dm_db_task_space_usage
               WHERE session_id = @@SPID
           )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    MAXOOR Performance Summary:
    - 0.3 seconds elapsed time
    - 245MB memory grant
    - 0MB tempdb usage
    - 193MB estimated sort set
    - 207 logical reads
    - 446 LOB logical reads
    - No sort warning

    The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
    Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use.

    The Solution

    The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables:

    SET STATISTICS IO ON;
    WITH TestTable AS
    (
        SELECT * FROM dbo.TestCHAR
    ),
    TopKeys AS
    (
        SELECT TOP (150) id
        FROM TestTable
        ORDER BY NEWID()
    )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id = ANY (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    WITH TestTable AS
    (
        SELECT * FROM dbo.TestMAX
    ),
    TopKeys AS
    (
        SELECT TOP (150) id
        FROM TestTable
        ORDER BY NEWID()
    )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    WITH TestTable AS
    (
        SELECT * FROM dbo.TestTEXT
    ),
    TopKeys AS
    (
        SELECT TOP (150) id
        FROM TestTable
        ORDER BY NEWID()
    )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    WITH TestTable AS
    (
        SELECT * FROM dbo.TestMAXOOR
    ),
    TopKeys AS
    (
        SELECT TOP (150) id
        FROM TestTable
        ORDER BY NEWID()
    )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);
    SET STATISTICS IO OFF;

    All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries:

    Table 'TestCHAR':   logical reads 25511, lob logical reads 0
    Table 'TestMAX':    logical reads 25511, lob logical reads 0
    Table 'TestTEXT':   logical reads 412,   lob logical reads 597
    Table 'TestMAXOOR': logical reads 413,   lob logical reads 446

    We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

    The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are:

    Table 'TestCHAR':   logical reads 835, lob logical reads 0
    Table 'TestMAX':    logical reads 835, lob logical reads 0
    Table 'TestTEXT':   logical reads 686, lob logical reads 597
    Table 'TestMAXOOR': logical reads 686, lob logical reads 448

    With the new index, all four queries use the same query plan.

    Performance Summary:
    - 0.3 seconds elapsed time
    - 6MB memory grant
    - 0MB tempdb usage
    - 1MB sort set
    - 835 logical reads (CHAR, MAX)
    - 686 logical reads (TEXT, MAXOOR)
    - 597 LOB logical reads (TEXT)
    - 448 LOB logical reads (MAXOOR)
    - No sort warning

    I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

    Conclusion

    This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • .htaccess - RewriteRules working, but browser address bar displaying full (unfriendly) URL

    - by axol
    Hey there, I haven't been able to find a solution to this around the net or these forums - apologise if I've missed something! My .htaccess RewriteRules are working well: I have search-engine- and user-friendly links in my web pages, with the unfriendly database URLs running hidden in the background. That was until I added a RewriteRule to add "www." to the front of URLs when the user doesn't enter it, to ensure only one version of each URL appears in search engines. Here's what's now happening, and I can't figure out why! My friendly URL structure for content is like this (the database query string uses the first "importantword"): www.example.com/importantword-nonimportantword/

    .htaccess snippet:

    Options +FollowSymLinks
    Options -Indexes
    RewriteEngine on
    RewriteOptions MaxRedirects=10
    RewriteBase /
    RewriteRule ^/$ index.php [L]
    RewriteRule ^(.*)-(.*)/overview/$ detail.php?categoryID=$1 [L]
    RewriteCond %{HTTP_HOST} !^www.example.com$
    RewriteRule ^(.*)$ http://www.example.com/$1 [L]

    What's happening since I added the last two lines:

    CASE 1: the user types (or clicks) www.example.com/honda-vehicles/overview/ - works correctly - they are taken to the correct page and the browser URL bar says: www.example.com/honda-vehicles/overview/

    CASE 2: the user types example.com - works correctly - they are taken to www.example.com and the browser URL bar says: www.example.com

    CASE 3: the user types (or clicks) example.com/honda-vehicles/overview/ i.e. without the prefix "www" - does NOT work correctly - they are taken to the right page, but the browser URL bar displays the unfriendly URL: www.example.com/detail.php?categoryID=honda

    I figure there's some issue with the order of the RewriteRules, but it's doing my head in trying to logically step through it and figure it out! Any assistance at all or pointers would be most appreciated!
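    [Editorial note: the usual cause of this symptom is rule order. On the first pass, the friendly URL matches the detail.php rule and is rewritten internally; [L] only ends that pass, and mod_rewrite then re-runs the ruleset against the rewritten URL. On the second pass the host condition matches, and because the substitution starts with http:// it becomes an external redirect, exposing the internal detail.php URL. A possible fix, sketched against the snippet above (not tested against the original site), is to canonicalise the host first with an explicit 301:]

    # Send non-canonical hosts back to www.example.com before any internal
    # rewriting happens, so the browser keeps the friendly URL.
    RewriteEngine on
    RewriteBase /
    RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    # Internal, invisible rewrites now only ever run on the canonical host.
    # (Note: in per-directory context the leading slash is stripped, so ^$
    # rather than ^/$ matches a request for the site root.)
    RewriteRule ^$ index.php [L]
    RewriteRule ^(.*)-(.*)/overview/$ detail.php?categoryID=$1 [L]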

    Read the article

  • Rspec2, Rails3, Authlogic: Can't run specs

    - by Sam
    When I do rspec spec in my Rails project, I get:

    No examples were matched. Perhaps {:if=>#<Proc:0x0000010126e998@/Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:50 (lambda)>, :unless=>#<Proc:0x0000010126e970@/Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:51 (lambda)>} is excluding everything?
    Finished in 0.00004 seconds
    0 examples, 0 failures

    Now, this suggests that if I wrote a spec it would work, but as soon as I write one (and I do include spec_helper) I get:

    /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/backward_compatibility.rb:20:in `const_missing': uninitialized constant Authlogic (NameError)
    from /{myapp}/app/models/user_session.rb:1:in `<top (required)>'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:138:in `block (2 levels) in eager_load!'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:137:in `each'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:137:in `block in eager_load!'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:135:in `each'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:135:in `eager_load!'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:108:in `eager_load!'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application/finisher.rb:41:in `block in <module:Finisher>'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:25:in `instance_exec'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:25:in `run'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:50:in `block in run_initializers'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:49:in `each'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:49:in `run_initializers'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:134:in `initialize!'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:77:in `method_missing'
    from /{myapp}/config/environment.rb:5:in `<top (required)>'
    from <internal:lib/rubygems/custom_require>:29:in `require'
    from <internal:lib/rubygems/custom_require>:29:in `require'
    from /{myapp}/spec/spec_helper.rb:3:in `<top (required)>'
    from <internal:lib/rubygems/custom_require>:29:in `require'
    from <internal:lib/rubygems/custom_require>:29:in `require'
    from /{myapp}/spec/controllers/pages_controller_spec.rb:1:in `<top (required)>'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `block in load_spec_files'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `map'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load_spec_files'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/command_line.rb:18:in `run'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:55:in `run_in_process'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:46:in `run'
    from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:10:in `block in autorun'

    The important line here seems to be /core/backward_compatibility.rb:20:in `const_missing': uninitialized constant Authlogic (NameError). Now, if this were Rails 2.3.8, I'd simply put config.gem "authlogic" into environment.rb, in the initialization code block. However, the Rails 3 environment.rb looks way different (there is no config code block, so putting it in arbitrarily causes an error where config is not defined). So my questions are: 1) Do I actually have to put the gem config anywhere? I looked at https://github.com/trevmex/authlogic_rails3_example/ and it seems he didn't put it anywhere. 2) Does anyone know what I'm doing wrong in terms of rspec?
    My gem list is:

    *** LOCAL GEMS *** abstract (1.0.0) actionmailer (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) actionpack (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activemodel (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) activerecord (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activeresource (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activesupport (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) arel (2.0.6, 1.0.1) asdf (0.5.0) authlogic (2.1.6, 2.1.3) autotest (4.4.6, 4.4.1) autotest-fsevent (0.2.4) autotest-growl (0.2.9) autotest-rails (4.1.0) autotest-rails-pure (4.1.2) bluecloth (2.0.9) builder (2.1.2) bundler (1.0.7, 1.0.2) cgi_multipart_eof_fix (2.5.0) commonwatir (1.6.2) couchrest (0.33) cri (1.0.1) cucumber (0.4.4, 0.4.3, 0.3.11) daemons (1.1.0, 1.0.10) dependencies (0.0.7) diff-lcs (1.1.2) erubis (2.6.6) fastercsv (1.5.0) fastthread (1.0.7) firewatir (1.6.2) flay (1.4.0) flog (2.2.0) funfx (0.2.2) gem_plugin (0.2.3) gemsonrails (0.7.2) giraffesoft-resource_controller (0.6.5) haml (2.2.14) hoe (2.3.3) i18n (0.4.1) jscruggs-metric_fu (1.1.5) json_pure (1.1.9) kramdown (0.12.0) mail (2.2.13, 2.2.6.1) memcache-client (1.8.5) mime-types (1.16) mojombo-chronic (0.3.0) mongrel (1.1.5) monk (0.0.7) nanoc (3.1.5) nanoc3 (3.1.5) nokogiri (1.4.3.1, 1.4.0) open4 (0.9.6) polyglot (0.3.1, 0.2.9) rack (1.2.1, 1.0.1) rack-mount (0.6.13) rack-test (0.5.6) rails (3.0.0, 2.3.4) rails3-generators (0.17.0, 0.14.0) railties (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) rake (0.8.7) relevance-rcov (0.9.2.1) rest-client (1.0.3) rspec (2.3.0, 2.0.0.rc, 1.2.9) rspec-core (2.3.1, 2.0.0.rc) rspec-expectations (2.3.0, 2.0.0.rc) rspec-mocks (2.3.0, 2.0.0.rc) rspec-rails (2.3.1, 2.0.0.rc, 1.2.9) ruby_parser (2.0.4) rubyforge (2.0.3) rubygems-update (1.3.6, 1.3.5) rvm (1.0.13) s4t-utils (1.0.4) safariwatir (0.3.7) sexp_processor (3.0.3) spork (0.7.3) sqlite3-ruby (1.3.1, 1.2.5) sys-uname (0.8.5) term-ansicolor (1.0.4) text-format (1.0.0) text-hyphen (1.0.0) thor (0.14.6, 0.14.3, 0.12.0) treetop (1.4.8, 1.4.2) tzinfo (0.3.23) user-choices (1.1.6) vlad (2.0.0) vlad-git (2.1.0) webrat (0.7.1, 0.6.0, 0.5.3) xml-simple (1.0.12) ZenTest (4.4.2)

    I am using Ruby 1.9.2 and Rails 3.0.3, installed using RVM, on OS X 10.6 Snow Leopard. I just want to be able to run my specs like I used to. As a separate issue, autotest yields an error about an include for autotest/growl even though I installed autotest-growl. Maybe this is a gem issue? I tried the same thing on my Ubuntu 10.04 server machine and got the same error there, though.

    Gemfile:

    source 'http://rubygems.org'

    gem 'rails', '3.0.3'
    # Bundle edge Rails instead:
    # gem 'rails', :git => 'git://github.com/rails/rails.git'

    gem 'sqlite3-ruby', :require => 'sqlite3'

    group :couch do
      gem 'couchrest'
    end

    group :user_auth do
      gem 'authlogic'
      gem "rails3-generators"
      gem 'facebooker'
    end

    group :markup do
      gem 'haml'
      gem 'sass'
    end

    group :testing do
      gem 'rspec-rails'
      gem 'rspec'
      gem 'webrat'
      gem 'cucumber'
      gem 'capybara'
      gem 'factory_girl'
      gem 'shoulda'
      gem 'autotest'
    end

    group :server do
      gem 'unicorn'
    end

    # Use unicorn as the web server
    # gem 'unicorn'
    # Deploy with Capistrano
    # gem 'capistrano'
    # To use debugger
    # gem 'ruby-debug'
    # Bundle the extra gems:
    # gem 'bj'
    # gem 'nokogiri'
    # gem 'sqlite3-ruby', :require => 'sqlite3'
    # gem 'aws-s3', :require => 'aws/s3'
    # Bundle gems for the local environment. Make sure to
    # put test-only gems in this group so their generators
    # and rake tasks are available in development mode:
    # group :development, :test do
    #   gem 'webrat'
    # end

    Gemfile.lock:

    GEM remote: http://rubygems.org/ specs: ZenTest (4.4.2) abstract (1.0.0) actionmailer (3.0.3) actionpack (= 3.0.3) mail (~> 2.2.9) actionpack (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) builder (~> 2.1.2) erubis (~> 2.6.6) i18n (~> 0.4) rack (~> 1.2.1) rack-mount (~> 0.6.13) rack-test (~> 0.5.6) tzinfo (~> 0.3.23) activemodel (3.0.3) activesupport (= 3.0.3) builder (~> 2.1.2) i18n (~> 0.4) activerecord (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) arel (~> 2.0.2) tzinfo (~> 0.3.23) activeresource (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) activesupport (3.0.3) arel (2.0.6) authlogic (2.1.6) activesupport autotest (4.4.6) ZenTest (>= 4.4.1) builder (2.1.2) capybara (0.4.0) celerity (>= 0.7.9) culerity (>= 0.2.4) mime-types (>= 1.16) nokogiri (>= 1.3.3) rack (>= 1.0.0) rack-test (>= 0.5.4) selenium-webdriver (>= 0.0.27) xpath (~> 0.1.2) celerity (0.8.6) childprocess (0.1.6) ffi (~> 0.6.3) couchrest (1.0.1) json (>= 1.4.6) mime-types (>= 1.15) rest-client (>= 1.5.1) cucumber (0.10.0) builder (>= 2.1.2) diff-lcs (~> 1.1.2) gherkin (~> 2.3.2) json (~> 1.4.6) term-ansicolor (~> 1.0.5) culerity (0.2.13) diff-lcs (1.1.2) erubis (2.6.6) abstract (>= 1.0.0) facebooker (1.0.75) json_pure (>= 1.0.0) factory_girl (1.3.2) ffi (0.6.3) rake (>= 0.8.7) gherkin (2.3.2) json (~> 1.4.6) term-ansicolor (~> 1.0.5) haml (3.0.25) i18n (0.5.0) json (1.4.6) json_pure (1.4.6) kgio (2.0.0) mail (2.2.13) activesupport (>= 2.3.6) i18n (>= 0.4.0) mime-types (~> 1.16) treetop (~> 1.4.8) mime-types (1.16) nokogiri (1.4.4) polyglot (0.3.1) rack (1.2.1) rack-mount (0.6.13) rack (>= 1.0.0) rack-test (0.5.6) rack (>= 1.0) rails (3.0.3) actionmailer (= 3.0.3) actionpack (= 3.0.3) activerecord (= 3.0.3) activeresource (= 3.0.3) activesupport (= 3.0.3) bundler (~> 1.0) railties (= 3.0.3) rails3-generators (0.17.0) railties (>= 3.0.0) railties (3.0.3) actionpack (= 3.0.3) activesupport (= 3.0.3) rake (>= 0.8.7) thor (~> 0.14.4) rake (0.8.7) rest-client (1.6.1) mime-types (>= 1.16) rspec (2.3.0) rspec-core (~> 2.3.0) rspec-expectations (~> 2.3.0) rspec-mocks (~> 2.3.0) rspec-core (2.3.1) rspec-expectations (2.3.0) diff-lcs (~> 1.1.2) rspec-mocks (2.3.0) rspec-rails (2.3.1) actionpack (~> 3.0) activesupport (~> 3.0) railties (~> 3.0) rspec (~> 2.3.0) rubyzip (0.9.4) sass (3.1.0.alpha.206) selenium-webdriver (0.1.2) childprocess (~> 0.1.5) ffi (~> 0.6.3) json_pure rubyzip shoulda (2.11.3) sqlite3-ruby (1.3.2) term-ansicolor (1.0.5) thor (0.14.6) treetop (1.4.9) polyglot (>= 0.3.1) tzinfo (0.3.23) unicorn (3.1.0) kgio (~> 2.0.0) rack webrat (0.7.2) nokogiri (>= 1.2.0) rack (>= 1.0) rack-test (>= 0.5.3) xpath (0.1.2) nokogiri (~> 1.3) PLATFORMS ruby DEPENDENCIES authlogic autotest capybara couchrest cucumber facebooker factory_girl haml rails (= 3.0.3) rails3-generators rspec rspec-rails sass shoulda sqlite3-ruby unicorn webrat
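    [Editorial note: judging from the Gemfile above, a likely culprit is Bundler's group handling. The config/application.rb that Rails 3 generates calls Bundler.require(:default, Rails.env), so gems placed in custom groups such as :user_auth are installed but never required, which would leave the Authlogic constant undefined exactly as in the eager-load trace. A sketch of the relevant lines, assuming an otherwise stock application.rb (the group names are taken from the Gemfile above):]

    # config/application.rb (sketch)
    require File.expand_path('../boot', __FILE__)
    require 'rails/all'

    if defined?(Bundler)
      # The generated file only requires :default and the current environment's
      # group, so custom groups are silently skipped. Name them explicitly:
      Bundler.require(:default, :couch, :user_auth, :markup, Rails.env)
    end

    [The same reasoning applies to the specs: Bundler requires the group named after Rails.env, which is :test, not :testing, so the rspec gems in that group are never loaded either. Renaming the group to :test, or listing :testing explicitly in Bundler.require, should get those loading again.]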

    Read the article
