Search Results

Search found 97400 results on 3896 pages for 'application data'.

Page 174 of 3896

  • How do I back up my customers' data?

    - by marcamillion
    If you run a SaaS app, or work on one, I would love to hear from you. Where the safety and security of your customers' data is paramount, how do you secure it and back it up? I would love to know your main host (e.g. Heroku, Engine Yard, Rackspace, MediaTemple, etc.) and who you use for your backup. Be as detailed as possible - e.g. a quick overview of your service and the data you store (images, for instance), and what happens with the images when a user uploads them (e.g. they go to your Linode VPS and are posted to the site for the user to see; then they are automatically sent to AWS or wherever; then once a week they are backed up to tape by the managed hosting provider, and you also back them up to your house/office). If you could also give some idea of the average unit cost of storage (per GB/per user/per month), I would really appreciate that. Getting ready to launch my app, and I would love to get some more perspective on the nitty-gritty details involved. Thanks!
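
    One building block that comes up in setups like the one described above is an automated push of each uploaded file to off-site object storage. Below is a minimal C# sketch using the AWS SDK for .NET; the bucket name, key prefix, and file path are hypothetical placeholders, and a real pipeline would add encryption, retries, and lifecycle rules:

        using System;
        using System.Threading.Tasks;
        using Amazon.S3;
        using Amazon.S3.Model;

        class BackupUploader
        {
            // Pushes one local file to S3 as an off-site backup copy.
            // Bucket/key/path values here are illustrative placeholders.
            static async Task Main()
            {
                using var client = new AmazonS3Client(); // credentials from env/profile
                var request = new PutObjectRequest
                {
                    BucketName = "example-backup-bucket",
                    Key = $"uploads/{DateTime.UtcNow:yyyy-MM-dd}/photo.jpg",
                    FilePath = @"/var/app/uploads/photo.jpg"
                };
                await client.PutObjectAsync(request);
                Console.WriteLine("Backup copy uploaded.");
            }
        }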

    Read the article

  • What conventions or frameworks exist for MVVM in Perl?

    - by Will Sheppard
    We're using Catalyst to render lots of webforms in what will become a large application. I don't like the way all the form data is confusingly lumped into a big hash in the Controller before being passed to the template. It seems jumbled up and messy for the template. I'm sure there are real disadvantages that I haven't described properly... Are there? One solution is to just decide on a convention for the hash, e.g.:

        {
            defaults => { type => ['a', 'b', 'c'] },
            input    => { type => 'a' },
            output   => {
                message => "2 widgets found of type a",
                widgets => [ 'foo', 'bar' ],
            },
        }

    Another way is to store the page/form data as attributes in a class (a ViewModel?) and pass a whole object to the template, which it could use like this:

        <p class="message">[% model.message %]</p>
        [% FOREACH widget IN model.widgets %]

    Which way is more flexible for large applications? Are there any other solutions or existing Catalyst-compatible frameworks?

    Read the article

  • Network connection fails when downloading data

    - by Guus
    In the past few weeks, I've noticed my network connection becoming unstable while downloading files, and I'm not sure how to diagnose the issue. My network connections appear to be temporarily unresponsive (for periods of up to a minute or so) while I'm downloading a file that is large enough not to be downloaded instantly. The problem occurs when downloading data through a web browser, but also when using SCP to download data from a remote location. During the period in which network connections are unresponsive, every resource that I try to access over the network is unavailable. This includes:

        - The download itself (SCP reports a 'stalled' download)
        - Web pages (won't load - browsers report 'resource unavailable')
        - SSH sessions (the CLI freezes)
        - VPN connections (connections terminate)
        - IM client connections (the client starts reconnection attempts)
        - ... (everything is pretty much dead)

    I've noticed that during such a period, I cannot even access the (web-based) administrative console of the router on my LAN at home (although it remains reachable from other devices). The problem occurs when I'm connected to my home network, but also when I'm in the office. Devices other than my laptop are not affected. Given these two characteristics, I assume the cause lies within my laptop, not the network infrastructure. I'm running Ubuntu 11.10. Apart from applying automatic updates from Ubuntu, I can't think of a change to the OS that could have started the problems. I'm absolutely positive that this issue did not occur up to a few weeks ago (it's very noticeable, and it only started to annoy me in the last few days). When the problem occurs, applications that use a network connection fail visibly (I get popups telling me that a VPN connection is broken, for instance). The network manager does not report any issues related to my wifi connection, though. Help?

    Read the article

  • Data Center Modernization: Harness the power of Oracle Exalogic and Exadata with PeopleSoft

    - by Michelle Kimihira
    Author: Latha Krishnaswamy, Senior Manager, Exalogic Product Management

    Allegis Group, a Hanover, MD-based global staffing company, is the largest privately held staffing company in the United States, with more than 10,000 internal employees and 90,000 contract employees. Allegis Group is a $6+ billion company, offering a full range of specialized staffing and recruiting solutions to clients in a wide range of industries. The company processes about 133,000 paychecks per week, every week of the year. With 300 offices around the world and the hefty task of managing HR and payroll, the PeopleSoft system at Allegis is a mission-critical application. The firm is in the midst of a data center modernization initiative. Part of that project meant moving the company's PeopleSoft applications (the Financials and HR modules as well as a custom Time & Expense module) to a converged infrastructure. The company ran a proof of concept with four different converged architectures before deciding upon Exadata and Exalogic as the platform of choice. Performance combined with high availability for running mission-critical payroll processes drove this decision. During the testing on Exadata and Exalogic, Allegis applied a particular tax update (11-F) in the production environment. A job that had run for roughly six hours completed in less than 1.5 hours. With additional tuning, the second run of the 11-F tax update dropped to 33 minutes - a 90% improvement! Not only that, the move will help the company save money on middleware by consolidating its use of Oracle licensing on a single platform.

    Summary: With a modern data center powered by Exalogic and Exadata to run mission-critical PeopleSoft HR and Financial applications, Allegis is positioned to manage business growth and improve employee productivity. PeopleSoft applications run on an engineered-systems platform, minimizing hardware and software integration risks.

    Additional Information - Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • How to prevent duplicate data access methods that retrieve similar data?

    - by Ronald Wildenberg
    In almost every project I work on with a team, the same problem seems to creep in. Someone writes UI code that needs data and writes a data access method:

        AssetDto GetAssetById(int assetId)

    A week later someone else is working on another part of the application, also needs an AssetDto - but now including 'approvers' - and writes the following:

        AssetDto GetAssetWithApproversById(int assetId)

    A month later someone needs an asset, but now including the 'questions' (or the 'owners' or the 'running requests', etc.):

        AssetDto GetAssetWithQuestionsById(int assetId)
        AssetDto GetAssetWithOwnersById(int assetId)
        AssetDto GetAssetWithRunningRequestsById(int assetId)

    And it gets even worse when methods like GetAssetWithOwnerAndQuestionsById start to appear. You see the pattern that emerges: an object is attached to a large object graph, and you need different parts of this graph in different locations. Of course, I'd like to avoid having a large number of methods that do almost the same thing. Is it simply a matter of team discipline, or is there some pattern I can use to prevent this? In some cases it might make sense to have separate methods - getting an asset with running requests may be expensive, so I do not want to include these all the time. How should I handle such cases?
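
    One way to collapse this family of methods - a hedged sketch, not a prescription - is a single entry point that takes a flags value describing which parts of the graph to load. The type and member names below are hypothetical:

        using System;
        using System.Collections.Generic;

        [Flags]
        public enum AssetIncludes
        {
            None            = 0,
            Approvers       = 1,
            Questions       = 2,
            Owners          = 4,
            RunningRequests = 8
        }

        public class AssetDto
        {
            public int Id { get; set; }
            public List<string> Approvers { get; set; }
            public List<string> Questions { get; set; }
        }

        public class AssetRepository
        {
            // One method replaces the GetAssetWithXxxById family; callers state
            // what they need, so expensive parts (e.g. running requests) are
            // only loaded on demand.
            public AssetDto GetAssetById(int assetId, AssetIncludes includes = AssetIncludes.None)
            {
                var asset = new AssetDto { Id = assetId };    // stand-in for the base query
                if (includes.HasFlag(AssetIncludes.Approvers))
                    asset.Approvers = LoadApprovers(assetId); // stand-in for an eager load/join
                if (includes.HasFlag(AssetIncludes.Questions))
                    asset.Questions = LoadQuestions(assetId);
                return asset;
            }

            private List<string> LoadApprovers(int id) => new List<string>();
            private List<string> LoadQuestions(int id) => new List<string>();
        }

    A caller that needs approvers and questions would then ask for exactly that: repository.GetAssetById(42, AssetIncludes.Approvers | AssetIncludes.Questions).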

    Read the article

  • USB and CD data cannot be read or mounted in 12.10

    - by aravind4j
    I'm using Ubuntu 12.10. I wrote a data disc using Brasero Disk Burner, but the system cannot read the CD. I tried to write the same data to my HP v220w pen drive, but now there is an error with that too. It shows the following:

        Error mounting /dev/sdb at /media/aravind4j/Aravind4j:
        Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdb" "/media/aravind4j/Aravind4j"' exited with non-zero exit status 13:
        $MFTMirr does not match $MFT (record 0).
        Failed to mount '/dev/sdb': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice.
        The usage of the /f parameter is very important!
        If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1).
        Please see the 'dmraid' documentation for more details. (udisks-error-quark, 0)

    As you can see from the error message, the pen drive uses an NTFS file system. I would like to recover the files, so please help me recover them without formatting the drive.

    Read the article

  • Implementing an ILogger interface to log data

    - by Jon
    I need to write data to a file in one of my classes. Obviously I will pass an interface into my class to decouple it. I was thinking this interface would be used for testing and also in other projects. This is my interface:

        // This could be implemented by a filesystem, a web service, etc.
        public interface ILogger
        {
            List<string> PreviousLogRecords { get; set; }
            void Log(string data);
        }

        public interface IFileLogger : ILogger
        {
            string FilePath { get; set; }
            bool ValidFileName { get; }
        }

        public class MyClassUnderTest
        {
            public MyClassUnderTest(IFileLogger logger) { .... }
        }

        [Test]
        public void TestLogger()
        {
            var mock = new Mock<IFileLogger>();
            mock.Setup(x => x.Log(It.IsAny<string>()).AddsDataToList()); // Is this possible??
            var myClass = new MyClassUnderTest(mock.Object);
            myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");
            Assert.AreEqual(3, mock.Object.PreviousLogRecords.Count);
        }

    I don't believe this will work as written, because nothing is storing the items. So, is this possible using Moq, and what do you think of the design of the interface?
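
    For what it's worth, something close to this is possible with Moq's Callback: capture logged strings in a local list and serve that list from the PreviousLogRecords getter. A hedged sketch (NUnit-style asserts assumed):

        using System.Collections.Generic;
        using Moq;
        using NUnit.Framework;

        [TestFixture]
        public class LoggerTests
        {
            [Test]
            public void Log_StoresEachRecord()
            {
                var stored = new List<string>();
                var mock = new Mock<IFileLogger>();

                // Capture every logged string in the backing list...
                mock.Setup(x => x.Log(It.IsAny<string>()))
                    .Callback<string>(stored.Add);

                // ...and serve that list through the property.
                mock.SetupGet(x => x.PreviousLogRecords).Returns(stored);

                mock.Object.Log("1");
                mock.Object.Log("2");
                mock.Object.Log("3");

                Assert.AreEqual(3, mock.Object.PreviousLogRecords.Count);
            }
        }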

    Read the article

  • How should I load level data in Java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:

        Set landscape to grass
        Create rocks at ...
        Create player at X, Y
        Set goal to "Get to point X Y"
        Spawn enemy at X, Y

    I'd then have each object knowing what it has to do, and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects could be spawned through that. I could also create a base level class and extend it for each level, but that would create a large number of classes. Another idea is to have one level parser class, but with a case for each level. This would be extremely silly and bulky, but I mention it because I found that I was doing exactly this at 2 AM last night. I'm finally getting why I have to plan out my inheritances, though. RIP project. I might be completely missing another option.

    Read the article

  • How to manage Areas/Levels in an RPG?

    - by Hexlan
    I'm working on an RPG and I'm trying to figure out how to manage the different levels/areas in the game. Currently I create a new state (source file) for every area, defining its unique aspects. My concern is that as the game grows the number of class files will become unmanageable with all the towns, houses, shops, dungeons, etc. that I need to keep track of. I would also prefer to separate my levels from the source code because non-programmer members of the team will be creating levels, and I would like the engine to be as free from game specific code as possible. I'm thinking of creating a class that provides all the functions that will be the same between all the levels/areas with a unique member variable that can be used to look up level specifics from data. This way I only need to define level/area once in the code, but can create multiple instances each with its own unique aspects provided by data. Is this a good way to go about solving the issue? Is there a better way to handle a growing number of levels?
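
    A minimal sketch of the data-driven approach described above: one generic Area class whose specifics come from a keyed data store rather than from subclasses. The C# here, and the shape of AreaData, are illustrative assumptions, not engine-specific code:

        using System.Collections.Generic;

        // All levels/areas share one class; everything that differs lives in data.
        public class AreaData
        {
            public string Tileset;
            public List<string> Npcs = new List<string>();
            public List<string> Exits = new List<string>();
        }

        public class Area
        {
            // Stand-in for files authored by non-programmers (JSON, XML, etc.).
            static readonly Dictionary<string, AreaData> Definitions =
                new Dictionary<string, AreaData>
                {
                    ["town_square"] = new AreaData { Tileset = "town", Npcs = { "shopkeeper" } },
                    ["dungeon_1"]   = new AreaData { Tileset = "cave", Npcs = { "slime" } }
                };

            public string Id { get; }
            public AreaData Data { get; }

            public Area(string id)
            {
                Id = id;
                Data = Definitions[id]; // in a real engine: parse a level file instead
            }
        }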

    Read the article

  • Unified data source for K2-installed Joomla websites

    - by Özkan ÖZLÜ
    I am responsible for a few web sites of my organization. I use Joomla! 2.5.9 for those web sites, and they all run on the same server. I use the K2 component for content management. I have a general website which shows all the staff information on the 'Staff' page. Some of those people and their content are also shown on another department's website. There is a separate database for each web site. For example: on the general website (let's say general.org), when I click on the 'Staff' menu item, the page shows all of the people who work at my organization, across different departments. On another web site (e.g. education.general.org), when I click on the 'Staff' menu item, it shows the people who work in the education department. But each web site has its own user accounts, which means a modification in one of them does not affect the others. If one of the education staff changes his profile picture on the education web site, he also has to do it on the general web site. And sometimes one person might work in two departments, so he has to edit his data three times. Is it possible to merge the records for all the websites? In other words, I want everyone to insert/update their data on the general web site, and have the other web sites updated automatically.

    Read the article

  • Storing Tiled Level Data in J2ME game

    - by Alex
    I'm developing a J2ME game which uses tiled backgrounds for the levels. My question is how to store this tile information in my game. At the moment it is stored as an array, with each number representing a different tile from the tile-sheet. This works well enough; however, I don't like the fact that it is 'hard-coded' into the game, because (at least in my opinion) it makes it harder to edit the levels or design new ones. I was also thinking that it would be difficult if you wanted to add a 'level pack', though I'm not sure how this would be achieved; it's not something I was planning on doing, I'm just curious. I was wondering if there is a way I could store the level data in some external file and then load it into the game. The problem is I don't know what the limitations are for J2ME regarding file I/O - can it read in any file like Java can? I am aware of the RMS, but from my experience I don't think this would work (unless I am mistaken). Also, would loading the data in this way be too big a performance hit? Or is there another way I can achieve what I am trying to do? As I said, the way I have it at the moment works fine, and if this is the only viable option then it will suffice.

    Read the article

  • Triggering Data Changes in N-Tier

    - by Ryan Kinal
    I've been studying n-tier architectures as of late, particularly in VB.NET with Entity Framework and/or LINQ to SQL. I understand the basic concepts, but have been wondering about best practices in regard to triggering CRUD-type operations from user input/action. So, the architecture looks something like the following:

        [presentation layer] - [business layer] - [data layer] - (database)

    Getting information from the database into the presentation layer is simple and abstracted: it's just a matter of instantiating a new object from the business layer, which in turn uses the data layer to get at the correct information. However, saving (updating and inserting) and deleting seem to require particular APIs on the relevant business objects. I have to assume this is standard practice, unless a business object should save itself on various operations (which seems inefficient) or on disposal (which seems like it just wouldn't work, or may be unwieldy and unreliable). Should my "savable" business objects all implement a particular "ISavable" or "IDatabaseObject" interface? Is this a recognized (anti-)pattern? Are there other, better patterns I should be using that I'm just unaware of? The TLDR question, I suppose, is: how does the presentation layer trigger database changes?
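
    One common answer - sketched here with hypothetical names, not as the definitive pattern - is to keep business objects persistence-ignorant and let the presentation layer trigger changes through an explicit repository boundary in the business layer:

        // Business layer: a persistence-ignorant entity plus an explicit boundary
        // through which the presentation layer triggers CRUD operations.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface ICustomerRepository
        {
            Customer GetById(int id);
            void Save(Customer customer);   // insert or update
            void Delete(Customer customer);
        }

        // Presentation layer: reacts to user input and calls the boundary;
        // it never talks to the data layer directly.
        public class CustomerPresenter
        {
            private readonly ICustomerRepository _repository;

            public CustomerPresenter(ICustomerRepository repository)
            {
                _repository = repository;
            }

            public void OnSaveClicked(int id, string newName)
            {
                var customer = _repository.GetById(id);
                customer.Name = newName;
                _repository.Save(customer); // the data layer implements this with EF/LINQ to SQL
            }
        }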

    Read the article

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere - I'm rather new to DX. My question concerns conservation of resources - specifically, textures in VRAM. I assume that upon returning from a call to CreateTexture2D, a copy of any texture data supplied has been made elsewhere, likely in VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects which point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate code more than would otherwise be needed.

    Read the article

  • Help deciding on language for a complex desktop - web application

    - by user967834
    I'm about to start working on a fairly complex project needing a desktop GUI as well as a web interface, and I need to decide on a language (or languages) to use. This is from an electrical engineering/robotics background. These are the requirements:

        - The program will have to read data from multiple sensors and inputs (motion sensor, temperature sensor, capacitive sensor, infrared, magnetic sensors, etc.) through a port on a computer - so either through USB or Ethernet.
        - The program will have to be able to send control signals based on this input.
        - The program will have to continuously monitor all input signals at all times - so realtime data.
        - The program will require authentication.
        - The program will need to be controllable from a web interface, from anywhere, via logging in to a website.
        - The web interface will also need to have realtime feedback once authenticated.

    What language do you think would best accomplish this? I was thinking maybe of saving everything into a database which can be accessed by both the desktop and web app? And would Python be able to do all of this? Or something like a remote desktop app? I know this is a complex project, but let's assume I can learn any language. Has anyone done something like this, and if so, how did you accomplish it?

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. with the addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements?

    EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking about where to cache the result set and what tradeoffs might exist for each option - for example, storing the IDs of the entities in the result set on the client, storing the IDs of the entities in the user's session on the web server, or storing them in a temporary table in the database. I'm not looking specifically for a single solution, as different scenarios may result in different approaches (and such a question would be more suited to stackoverflow.com than here), but more for a design comparison between the possible approaches.
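
    To make the server-side option concrete, here is a minimal sketch of caching the ordered result IDs once per query and paging by slicing that list. The cache type and the RunExpensiveSearch call are hypothetical stand-ins; a real system would add expiry and invalidation:

        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Linq;

        public class SearchPager
        {
            // queryKey -> ordered IDs of every match, captured when the search first runs.
            private readonly ConcurrentDictionary<string, List<int>> _cache =
                new ConcurrentDictionary<string, List<int>>();

            public IEnumerable<int> GetPage(string queryKey, int page, int pageSize)
            {
                // The expensive search executes once; later pages reuse the frozen
                // ID list, so next/previous stays consistent even if new rows appear.
                var ids = _cache.GetOrAdd(queryKey, key => RunExpensiveSearch(key));
                return ids.Skip(page * pageSize).Take(pageSize);
            }

            private List<int> RunExpensiveSearch(string queryKey)
            {
                // Stand-in for the real 15-second search; returns matching
                // entity IDs in rank order.
                return Enumerable.Range(1, 100000).ToList();
            }
        }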

    Read the article

  • Clustering Strings on the basis of Common Substrings

    - by pk188
    I have around 10,000+ strings and have to identify and group all the strings which look similar (I base the similarity on the number of common words between any two given strings). The more common words, the more similar the strings would be. For instance:

        1. How to make another layer from an existing layer
        2. Unable to edit data on the network drive
        3. Existing layers in the desktop
        4. Assistance with network drive

    In this case, strings 1 and 3 are similar, with common words Existing and Layer, and strings 2 and 4 are similar, with common words Network Drive (eliminating stop words). The steps I'm following are:

        1. Iterate through the data set.
        2. Do a row-by-row comparison.
        3. Find the common words between the strings.
        4. Form a cluster where the number of common words is greater than or equal to 2 (eliminating stop words).
        5. If the number of common words is < 2, put the string in a new cluster.
        6. Assign the rows either to the existing clusters or form a new one, depending upon the common words.
        7. Continue until all the strings are processed.

    I am implementing the project in C#, and have got to step 3. However, I'm not sure how to proceed with the clustering. I have researched a lot about string clustering but could not find any solution that fits my problem. Your inputs would be highly appreciated.
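
    Since the project is in C#, here is a hedged sketch of one simple greedy pass: attach each string to the first cluster whose representative shares at least two significant (non-stop) words, otherwise start a new cluster. It's an illustrative baseline, not a claim about the best clustering algorithm - note in particular that without stemming, 'layer' and 'layers' won't match:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class StringClusterer
        {
            static readonly HashSet<string> StopWords = new HashSet<string>
                { "how", "to", "the", "on", "an", "a", "from", "with", "in", "of" };

            static HashSet<string> SignificantWords(string s) =>
                new HashSet<string>(
                    s.ToLowerInvariant()
                     .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                     .Where(w => !StopWords.Contains(w)));
                // A real implementation would also stem words here
                // ("layers" -> "layer") so near-matches count.

            public static List<List<string>> Cluster(IEnumerable<string> strings)
            {
                var clusters = new List<List<string>>();
                foreach (var s in strings)
                {
                    var words = SignificantWords(s);
                    // Greedy: join the first cluster sharing >= 2 significant words
                    // with that cluster's first (representative) member.
                    var home = clusters.FirstOrDefault(c =>
                        SignificantWords(c[0]).Intersect(words).Count() >= 2);
                    if (home != null) home.Add(s);
                    else clusters.Add(new List<string> { s });
                }
                return clusters;
            }
        }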

    Read the article

  • Why is my USB data transfer so slow?

    - by Dave M G
    Whenever I do any kind of file transfer using USB, whether to a USB stick, or with my Android phone, or anything else, it is ridiculously slow. It says 59.8 KB/sec, which would be an awesome speed if this were 1991 and I were using a modem to dial up to my local BBS. Surely USB technology is better than that...? 37 seconds to move less data than the equivalent of one MP3 file? Also, regardless of what it says about speed and time, the reality is much, much slower. I routinely see it say something like "37 seconds left" and have to wait for minutes. Sometimes, if I want to move large amounts of files, it can say it will take 8 hours or more. Is this normal? My computer may not be the most awesome on the market, and it's about a year old, but it's an i5 with 4GB RAM and modern components, so surely this isn't the hardware's fault. What can I do to get better USB data transfer performance? Also, I did look at this question, but my newbie eyes don't see anything that looks like an actual solution, just a lot of discussion about what transfer rates could or should be. Update: As requested in the comments, I've generated a whole bunch of output from the command line and put it on Ubuntu Pastebin. Please see it here.

    Read the article

  • Storing large array of tiles, but allowing easy access to data

    - by Cyral
    I've been thinking about this for a while. I have a 2D tile-based platformer in XNA with a large array of tile data, and I've been running into memory problems with large maps. (I will add chunks soon!) Currently, each tile contains an Item along with other properties like how it's rotated, whether it is foreground or background, etc. An Item is static and has properties like the name, tooltip, type of item, how much light it emits, the collision it does to the player, etc. For example:

        public class Item
        {
            public static List<Item> Items;

            public Collision blockCollisionType;
            public string nameOfItem;
            public bool someOtherVariable; // etc., etc.

            public static Item Air;
            public static Item Stone;
            public static Item Dirt;

            static Item()
            {
                Items = new List<Item>()
                {
                    (Stone = new Item()
                    {
                        nameOfItem = "Stone",
                        blockCollisionType = Collision.Solid,
                    }),
                    (Air = new Item()
                    {
                        nameOfItem = "Air",
                        blockCollisionType = Collision.Passable,
                    }),
                };
            }
        }

    The array of tiles would then contain a Tile for each point:

        public class Tile
        {
            public Item item; // what type it is
            public bool onBackground;
            public int someOtherVariables; // etc., etc.
        }

    Now, most would probably use an enum, or some form of ID, to identify blocks. My system is really nice for finding out about an item: I can simply do tiles[x, y].item.nameOfItem, for example, to get the name. But then I realized the Item portion of each tile is over 1000 bytes! Wow! What I'm looking for is a way to use an ID (an int or byte, depending on how many items there are) instead of an Item, but still have a method for retrieving data about the type of item a tile contains.
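
    A hedged sketch of the ID approach: store a byte in each tile and resolve it through the static item registry on demand, so the per-tile cost drops to a few bytes while lookups stay almost as convenient. The names follow the poster's code above; the lookup property is an assumption:

        public class Tile
        {
            public byte ItemId;     // 1 byte instead of a full Item reference
            public bool OnBackground;

            // Same convenience as tile.item.nameOfItem, resolved on demand
            // through the static registry (Item.Items) defined above:
            public Item Item => Item.Items[ItemId];
        }

        // Usage: string name = tiles[x, y].Item.nameOfItem;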

    Read the article

  • How to quickly search through a very large list of strings / records on a database

    - by Giorgio
    I have the following problem: I have a database containing more than 2 million records. Each record has a string field X, and I want to display a list of records for which field X contains a certain string. Each record is about 500 bytes in size. To make it more concrete: in the GUI of my application I have a text field where I can enter a string. Above the text field I have a table displaying the (first N, e.g. 100) records that match the string in the text field. When I type or delete one character in the text field, the table content must be updated on the fly. I wonder if there is an efficient way of doing this using appropriate index structures and/or caching. As explained above, I only want to display the first N items that match the query. Therefore, for N small enough, it should not be a big issue to load the matching items from the database. Besides, caching items in main memory can make retrieval faster. I think the main problem is how to find the matching items quickly, given the pattern string. Can I rely on some DBMS facilities, or do I have to build some in-memory index myself? Any ideas?

    EDIT: I have run a first experiment. I have split the records into different text files (at most 200 records per file) and put the files in different directories (I used the content of one data field to determine the directory tree). I ended up with about 50,000 files in about 40,000 directories. I then ran Lucene to index the files. Searching for a string with the Lucene demo program is pretty fast. Splitting and indexing took a few minutes: this is totally acceptable for me, because it is a static data set that I want to query. The next step is to integrate Lucene into the main program and use the hits returned by Lucene to load the relevant records into main memory.
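
    The 'first N matches only' observation is worth making concrete: with lazy evaluation, a scan can stop as soon as N hits are found, so for common patterns it never touches most of the 2 million records. A minimal in-memory C# sketch of that idea - a stand-in for the database or Lucene index described above, not a replacement for one:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class RecordSearch
        {
            private readonly List<string> _fieldX; // field X of each record, cached in memory

            public RecordSearch(List<string> fieldX) => _fieldX = fieldX;

            // Lazy Where + Take: enumeration stops after the first 100 hits,
            // so re-running the query on each keystroke is usually cheap.
            public List<string> FirstMatches(string pattern, int n = 100) =>
                _fieldX.Where(x => x.IndexOf(pattern, StringComparison.OrdinalIgnoreCase) >= 0)
                       .Take(n)
                       .ToList();
        }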

    Read the article

  • Why change from WPF to Silverlight 4?

    - by stiank81
    I'm working on an application that we built in WPF rather than Silverlight, because we wanted a full-blown desktop application with the unique feel and advantages that brings. However, with the announcement of Silverlight 4, I hear there is a buzz about Silverlight now being the preferred choice even for desktop applications. So: why should I consider moving my WPF application to Silverlight 4, given that I still want a desktop application?

    Read the article

  • InstallShield 2009 Premier: uninstall doesn't close the process/GUI

    - by Samir
    My application (developed using C#.NET) is open. Now, when I uninstall, InstallShield gives a message stating that the application is already open and asking whether I really want to close it. Selecting 'Ignore' continues the uninstall, but some files and the exe of the application are not closed. How do I get InstallShield to close them on uninstall? Or are there some properties I have to set? I know that by adding a custom action at uninstall I can kill the process, but shouldn't InstallShield do it?

    Read the article

  • Command-line tool in Python in a fixed root directory

    - by koleto
    I would like to install my Python application as a command-line tool that works entirely inside the install directory (for example C:\Python26\Lib\site-packages\application). The problem is that I would like to refer, at runtime, to the submodules and resources from within the application directory tree. If I install the app with the [console_scripts] option, the default path is the current directory. Is there an elegant way to keep the execution path of the application pointing at the site-packages directory? Thanks

    Read the article

  • Error: Expected specifier-qualifier-list before ... in Xcode Core Data

    - by Doron Katz
    Hi guys, I keep getting the error "error: expected specifier-qualifier-list" for the Core Data code I'm working with, in the app delegate. When I got this error, along with about 40 other errors relating to managedObjectContext etc., I thought maybe the library needed to be imported. I hadn't done this before, but I went to the Frameworks group, chose to add existing frameworks, and it added CoreData.framework. I re-built and it still came up with the error. Do I need to import anything in the headers explicitly, or is there some other step I need to do? Thanks

    Read the article

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We're writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications, and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better - interaction with the database is critical to application performance, so we're looking for database top tips too. There's a Microsoft Surface apiece for the person who comes up with the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you'll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] - there are examples of what we're looking for and full competition details at Application Performance: The Best of the Web.

    Enter the judges… As mentioned in my last blog post, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We're now ready to reveal their secret identities! Judging your ASP.NET tips will be:

        - Jean-Philippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He's a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk.
        - Nik Molnar, a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects.
        - Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe, as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities.
        - Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he's worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development.

    And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll):

        - Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He's authored SQL Server Tacklebox and three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple-Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando.
        - Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He's been working with SQL Server since version 6.0. Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk.
        - Jonathan Allen, leader and founder of the PASS SQL South West user group. He's been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He's spoken at SQLBits and SQL in the City, as well as at local user groups across the UK. He's also a moderator at ask.sqlservercentral.com.

    Read the article

  • How Do You Get a Specific Value From a System.Data.DataTable Object?

    - by Giffyguy
    I'm a low-level algorithm programmer, and databases are not really my thing - so this'll be a n00b question if ever there was one. I'm running a simple SELECT query through our development team's DAO. The DAO returns a System.Data.DataTable object containing the results of the query. This is all working fine so far. The problem I have run into now: I need to pull a value out of one of the fields of the first row in the resulting DataTable - and I have no idea where to even start. Microsoft is so confusing about this! Arrrg! Any advice would be appreciated. I'm not providing any code samples, because I believe that context is unnecessary here. I'm assuming that all DataTable objects work the same way, no matter how you run your queries - and therefore any additional information would just make this more confusing for everyone.
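
    For the record, the DataTable part of this doesn't depend on the DAO at all: index the Rows collection, then the column by name or ordinal. A minimal sketch - the column names and the stand-in table below are placeholders, not the poster's schema:

        using System;
        using System.Data;

        class Example
        {
            static void Main()
            {
                DataTable table = GetTableFromDao(); // stand-in for the team's DAO call

                if (table.Rows.Count > 0)
                {
                    DataRow first = table.Rows[0];

                    object raw  = first["CustomerName"];           // by column name (boxed)
                    string name = first.Field<string>("CustomerName"); // typed access
                    int id      = first.Field<int>("Id");          // (System.Data.DataSetExtensions)

                    Console.WriteLine($"{id}: {name}");
                }
            }

            static DataTable GetTableFromDao()
            {
                var t = new DataTable();
                t.Columns.Add("Id", typeof(int));
                t.Columns.Add("CustomerName", typeof(string));
                t.Rows.Add(1, "Contoso");
                return t;
            }
        }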

    Read the article
