Search Results

Search found 42321 results on 1693 pages for 'sql reporting services 05'.

Page 486/1693 | < Previous Page | 482 483 484 485 486 487 488 489 490 491 492 493  | Next Page >

  • MYSQL: Limit Word Length for MySql Insert

    - by elmaso
    Hi, every search query is saved in my database, but I want to limit the character length for one single word: odisafuoiwerjsdkle -- too long -- don't write it to the database. My actual code is:

        $search = $_GET['q'];
        if (!($sql = mysql_query('SELECT * FROM `history` WHERE `Query`=\'' . $search . '\''))) {
            exit('<b>SQL ERROR:</b> 102, Cannot write history.');
        }
        while ($row = mysql_fetch_array($sql)) {
            $ID = $row['ID'];
        }
        if ($ID == '') {
            mysql_query('INSERT INTO history (Query) VALUES (\'' . $search . '\')');
        }
        if (!($sql = mysql_query('SELECT * FROM `history` ORDER BY `ID` ASC LIMIT 1'))) {
            exit('<b>SQL ERROR:</b> 102, Cannot write history.');
        }
        while ($row = mysql_fetch_array($sql)) {
            $first_id = $row['ID'];
        }
        if (!($sql = mysql_query('SELECT * FROM `history`'))) {
            exit('<b>SQL ERROR:</b> 102, Cannot write history.');
        }
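
    A minimal MySQL-side sketch of the guard (the 30-character limit and the literal term are assumptions for illustration; in practice the word splitting and length check would usually happen in PHP before the insert, ideally through a parameterized query):

        -- only insert the term when it is short enough and not already present
        INSERT INTO history (Query)
        SELECT 'odisafuoiwerjsdkle'
        FROM DUAL
        WHERE CHAR_LENGTH('odisafuoiwerjsdkle') <= 30
          AND NOT EXISTS (SELECT 1 FROM history WHERE Query = 'odisafuoiwerjsdkle');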

    Read the article

  • WPF and LINQ/SQL - how and where to keep track of changes?

    - by Groky
    I have a WPF application built using the MVVM pattern: My Models come from LINQ to SQL. I use the Repository Pattern to abstract away the DataContext. My ViewModels have a reference to a Model. Setting a property on the ViewModel causes that value to be written through to the Model. As you can see, my data is stored in my Model, and changes are therefore tracked by my DataContext. However, in this question I read: The guidelines from the MSDN documentation on the DataContext class are what I would recommend following: In general, a DataContext instance is designed to last for one "unit of work" however your application defines that term. A DataContext is lightweight and is not expensive to create. A typical LINQ to SQL application creates DataContext instances at method scope or as a member of short-lived classes that represent a logical set of related database operations. How do you track your changes? In your DataContext? In your ViewModel? Elsewhere?

    Read the article

  • Shared Datasets in SQL Server 2008 R2

    This article leverages the examples and concepts explained in Parts I through IV of the spatial data series, which develops a "BI-Satellite" app. Overview: In the spatial data series we ... [Read Full Article]

    Read the article

  • Local service accounts

    - by Jeremy
    Normally we create service accounts in Active Directory, and if we install things like SQL Server, etc., we set the software services to use those service accounts. The service accounts don't have the ability to log into a workstation interactively. For proofs of concept, we're installing SQL Server and other software on virtual Windows 7 workstations that aren't part of a domain, so we are creating local accounts that will be used by Windows services. Is it possible to stop those users from appearing as options on the login screen?

    Read the article

  • How frequent are network partitions on cloud services?

    - by roja
    Much is made of the CAP trade-off for data storage, where conflicts can be introduced if there is a network partition. My question is: is there any evidence that this is a problem that arises with any significant frequency in modern cloud IaaS services, e.g. EC2, Azure, Rackspace? Is it a problem which, despite being a theoretical roadblock in constructing idealised distributed systems, is in fact a non-issue for all practical concerns? Has anyone experienced a network partition within one of these systems (within a single data-centre)? If so, would you be willing to share any details?

    Read the article

  • php and mysql user tracking and reporting

    - by inertiahz
    Hi, I currently have a table which consists of user information and lesson IDs; the table layout looks like:

        ----------------------------------------------------
        |employeeID|numVisits|lessonID1|lessonID2|lessonID3|
        ----------------------------------------------------
        |33388     |2        |1        |0        |3        |

    and a lessons table which contains the information about the lesson:

        ------------------------------------------------------
        |lessonID |category |title    |filepath |numberviews |
        ------------------------------------------------------
        |1        |beginner |lesson   |file://  |10          |

    The lessonID fields in the user table hold an integer that tracks how many times someone has clicked on that lesson. In a report I now have the top 5 people who have visited the site, and I would like to be able to drill down into which lessons they have clicked on. Can anyone help with this? Or would restructuring the database be an easier task? Thanks
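
    If restructuring is an option, a sketch of a normalized visits table (names are assumptions, following the columns above) makes both the top-5 report and the drill-down a plain query:

        -- one row per employee/lesson pair instead of one column per lesson
        CREATE TABLE lesson_visits (
            employeeID INT NOT NULL,
            lessonID   INT NOT NULL,
            visits     INT NOT NULL DEFAULT 0,
            PRIMARY KEY (employeeID, lessonID)
        );

        -- top 5 visitors overall
        SELECT employeeID, SUM(visits) AS totalVisits
        FROM lesson_visits
        GROUP BY employeeID
        ORDER BY totalVisits DESC
        LIMIT 5;

        -- drill-down: which lessons one of those employees clicked on
        SELECT l.title, v.visits
        FROM lesson_visits v
        JOIN lessons l ON l.lessonID = v.lessonID
        WHERE v.employeeID = 33388
        ORDER BY v.visits DESC;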

    Read the article

  • What Is Utility Services?

    - by query_bug
    Hi, I need some information about the Utility Services Layer. Can someone please help me find information on it, as I am supposed to give a presentation on the Utility Services Layer? Thanks in advance...

    Read the article

  • Invoke wso2 admin services SOAPUI

    - by NGoyal
    I'm working with the WSO2 admin services. I get the URL http://localhost:9763/services/AuthenticationAdmin?wsdl for AuthenticationAdmin. When I hit the login operation with admin, admin, 127.0.0.1, I get true as the return value and the ESB console shows me logged in. But when I hit the logout operation, I don't get any response, and I notice that the header of the response does not contain any session ID. My ESB is 4.6.0. Login request:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:aut="http://authentication.services.core.carbon.wso2.org">
           <soapenv:Header/>
           <soapenv:Body>
              <aut:login>
                 <!--Optional:-->
                 <aut:username>admin</aut:username>
                 <!--Optional:-->
                 <aut:password>admin</aut:password>
                 <!--Optional:-->
                 <aut:remoteAddress>127.0.0.1</aut:remoteAddress>
              </aut:login>
           </soapenv:Body>
        </soapenv:Envelope>

    Login response:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
           <soapenv:Body>
              <ns:loginResponse xmlns:ns="http://authentication.services.core.carbon.wso2.org">
                 <ns:return>true</ns:return>
              </ns:loginResponse>
           </soapenv:Body>
        </soapenv:Envelope>

    In the response to the login call I only get these six header elements:

        Date: Tue, 25 Jun 2013 14:31:42 GMT
        Transfer-Encoding: chunked
        #status#: HTTP/1.1 200 OK
        Content-Type: text/xml; charset=UTF-8
        Connection: Keep-Alive
        Server: WSO2-PassThrough-HTTP

    There is no session ID. Can you please point out where I am going wrong? My scenario is that I want to log in to WSO2 and then hit some other admin service operation.

    Read the article

  • Solving the SQL Server Multiple Cascade Path Issue with a Trigger

    This tip will look at how you can use triggers to replace the functionality you get from the ON DELETE CASCADE option of a foreign key constraint.
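
    The worked example is in the full tip; a minimal sketch of the general technique, with made-up table names, is an INSTEAD OF DELETE trigger that removes the child rows on each cascade path before the parent rows:

        CREATE TRIGGER trg_Parent_Delete ON dbo.Parent
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- delete dependents on each path first, then the parent rows themselves
            DELETE c FROM dbo.ChildA AS c JOIN deleted AS d ON c.ParentId = d.ParentId;
            DELETE c FROM dbo.ChildB AS c JOIN deleted AS d ON c.ParentId = d.ParentId;
            DELETE p FROM dbo.Parent AS p JOIN deleted AS d ON p.ParentId = d.ParentId;
        END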

    Read the article

  • Windows 7: The audio service is not running?

    - by oscilatingcretin
    Why isn't my audio service running? It's so random. Sometimes I am able to change my PC volume, other times I am not. I've restarted the following services: Remote Procedure Call (RPC), Multimedia Class Scheduler, Windows Audio Endpoint Builder. I think I started noticing this after I went mucking around in msconfig looking for ways to improve PC performance. I know I shut off some services, but that was over six months ago, and I am finally getting tired of it happening.

    Read the article

  • SQL Server: Writing CASE expressions properly when NULLs are involved

    - by Mladen Prajdic
    We’ve all written a CASE expression (yes, it’s an expression and not a statement) or two every now and then. But did you know there are actually two formats you can write the CASE expression in? This actually bit me when I was trying to add some new functionality to an old stored procedure. In some rare cases the stored procedure just didn’t work correctly. After a quick look it turned out to be a CASE expression problem when dealing with NULLs. In the first format we make simple “equals to” comparisons to a value:

        SELECT CASE <value>
                   WHEN <equals this value> THEN <return this>
                   WHEN <equals this value> THEN <return this>
                   -- ... more WHENs here
                   ELSE <return this>
               END

    The second format is much more flexible since it allows for complex conditions. USE THIS ONE!

        SELECT CASE
                   WHEN <value> <compared to> <value> THEN <return this>
                   WHEN <value> <compared to> <value> THEN <return this>
                   -- ... more WHENs here
                   ELSE <return this>
               END

    Now that we know both formats and you know which one to use (the second one, if that hasn’t been clear enough), here’s an example of how the first format WILL make your evaluation logic WRONG. Run the following code for different values of @i. Just comment out any 2 out of 3 “SELECT @i =” statements.

        DECLARE @i INT
        SELECT  @i = -1   -- first result
        SELECT  @i = 55   -- second result
        SELECT  @i = NULL -- third result

        SELECT  @i AS OriginalValue,
                -- first CASE format. DON'T USE THIS!
                CASE @i
                    WHEN -1   THEN '-1'
                    WHEN NULL THEN 'We have a NULL!'
                    ELSE 'We landed in ELSE'
                END AS DontUseThisCaseFormatValue,
                -- second CASE format. USE THIS!
                CASE
                    WHEN @i = -1    THEN '-1'
                    WHEN @i IS NULL THEN 'We have a NULL!'
                    ELSE 'We landed in ELSE'
                END AS UseThisCaseFormatValue

    When the value of @i is -1, everything works as expected, since both formats go into the -1 WHEN branch. When the value of @i is 55, everything again works as expected, since both formats go into the ELSE branch. When the value of @i is NULL, the problems become evident. The first format doesn’t go into the WHEN NULL branch because it makes an equality comparison between two NULLs. Because a NULL is an unknown value, NULL = NULL does not evaluate to true. That is why the first format goes into the ELSE branch while the second format correctly handles the explicit IS NULL comparison. Please use the second, more explicit format. Your future self will be very grateful to you when he doesn’t have to discover these kinds of bugs.

    Read the article

  • Are ternary operators not valid for linq-to-sql queries?

    - by KallDrexx
    I am trying to display a nullable DateTime in my JSON response. In my MVC controller I am running the following query:

        var requests = (from r in _context.TestRequests
                        where r.scheduled_time == null && r.TestRequestRuns.Count > 0
                        select new
                        {
                            id = r.id,
                            name = r.name,
                            start = DateAndTimeDisplayString(r.TestRequestRuns.First().start_dt),
                            end = r.TestRequestRuns.First().end_dt.HasValue
                                      ? DateAndTimeDisplayString(r.TestRequestRuns.First().end_dt.Value)
                                      : string.Empty
                        });

    When I run requests.ToArray() I get the following exception:

        Could not translate expression
        'Table(TestRequest)
            .Where(r => ((r.scheduled_time == null) AndAlso (r.TestRequestRuns.Count > 0)))
            .Select(r => new <>f__AnonymousType18`4(
                id = r.id,
                name = r.name,
                start = value(QAWebTools.Controllers.TestRequestsController)
                            .DateAndTimeDisplayString(r.TestRequestRuns.First().start_dt),
                end = IIF(r.TestRequestRuns.First().end_dt.HasValue,
                          value(QAWebTools.Controllers.TestRequestsController)
                              .DateAndTimeDisplayString(r.TestRequestRuns.First().end_dt.Value),
                          Invoke(value(System.Func`1[System.String])))))'
        into SQL and could not treat it as a local expression.

    If I comment out the end = line, everything seems to run correctly, so it doesn't seem to be the use of my local DateAndTimeDisplayString method; the only thing I can think of is that LINQ to SQL doesn't like ternary operators. I think I've used ternary operators before, but I can't remember if I did it in this code base or another code base (that uses EF4 instead of L2S). Is this true, or am I missing some other issue?

    Read the article

  • In SQL Server what is most efficient way to compare records to other records for duplicates with in

    - by Glenn
    We have a SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is by using a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs, but I have to admit I'm having a lot of trouble with this one. For a concrete example, suppose we have one table, an "orders" table, with the following 6 fields: order_id, customer_id, product_id, quantity, sale_date, price. We want to look through the records to find suspect duplicates on the following example criteria, which get increasingly harder:

        1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
        2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
        3. Records that have the same customer_id and product_id, but quantities that differ by up to 20 units and sale dates within five days of each other, should be considered suspect.

    Is it possible to satisfy each one of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
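
    A sketch of the kind of set-based self-join that can replace the cursor, shown for criteria 1 and 2 (column names taken from the example above; the order_id inequality just stops each pair from being reported twice):

        -- criterion 1: same product, date and quantity, different customers
        SELECT o1.order_id, o2.order_id
        FROM orders AS o1
        JOIN orders AS o2
          ON  o1.product_id  = o2.product_id
          AND o1.sale_date   = o2.sale_date
          AND o1.quantity    = o2.quantity
          AND o1.customer_id <> o2.customer_id
          AND o1.order_id    < o2.order_id;

        -- criterion 2: same customer, product and quantity, sale dates within five days
        SELECT o1.order_id, o2.order_id
        FROM orders AS o1
        JOIN orders AS o2
          ON  o1.customer_id = o2.customer_id
          AND o1.product_id  = o2.product_id
          AND o1.quantity    = o2.quantity
          AND o1.order_id    < o2.order_id
          AND ABS(DATEDIFF(DAY, o1.sale_date, o2.sale_date)) <= 5;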

    Read the article

  • Chapter 7–Enforced Data Protection

    - by drsql
    As the book progresses, I find myself veering from the originally stated outline quite a bit, because as I teach about this more (and I am teaching a daylong db design class in August at http://www.sqlsolstice.com/ … shameless plug, but it is on topic :) I start to find that a given order works better. Originally I had slated myself to talk more about modeling here for three chapters, then get back to the more implementation topics to finish out the book, but now I am going to keep plugging through...(read more)

    Read the article

  • Set and Verify the Retention Value for Change Data Capture

    - by AllenMWhite
    Last summer I set up Change Data Capture for a client to track changes to their application database and apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this query to make that change:

        sp_cdc_change_job @job_type='cleanup', @retention=7200

    The value 7200 represents the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...(read more)
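
    For the verification the post goes on to cover, one way (assuming SQL Server 2008 or later, where CDC stores its job settings in msdb) is to read the cleanup job's row back:

        -- retention is stored in minutes on the cleanup job's row
        SELECT job_type, retention, threshold
        FROM msdb.dbo.cdc_jobs
        WHERE job_type = N'cleanup';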

    Read the article

  • Loading city/state from SQL Server to Google Maps?

    - by knawlejj
    I'm trying to make a small application that takes a city & state and geocodes that address to a lat/long location. Right now I am utilizing the Google Maps API, ColdFusion, and SQL Server. Basically the city and state fields are in a database table, and I want to take those locations and get a marker put on a Google Map showing where they are. This is my code to do the geocoding, and viewing the source of the page shows that it is correctly looping through my query and placing a location ("Omaha, NE") in the address field, but no marker, or map for that matter, is showing up on the page:

        function codeAddress() {
            <cfloop query="GetLocations">
            var address = document.getElementById(<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>).value;
            if (geocoder) {
                geocoder.geocode({<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>: address}, function(results, status) {
                    if (status == google.maps.GeocoderStatus.OK) {
                        var marker = new google.maps.Marker({
                            map: map,
                            position: results[0].geometry.location,
                            title: <cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>
                        });
                    } else {
                        alert("Geocode was not successful for the following reason: " + status);
                    }
                });
            }
            </cfloop>
        }

    And here is the code to initialize the map:

        var geocoder;
        var map;
        function initialize() {
            geocoder = new google.maps.Geocoder();
            var latlng = new google.maps.LatLng(42.4167, -90.4290);
            var myOptions = {
                zoom: 5,
                center: latlng,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            }
            var marker = new google.maps.Marker({
                position: latlng,
                map: map,
                title: "Test"
            });
            map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
        }

    I do have a map working that uses lat/long values hard-coded into the database table, but I want to be able to just use the city/state and convert that to a lat/long. Any suggestions or direction? Storing the lat/long in the database is also possible, but I don't know how to do that within SQL.

    Read the article

  • MSBuild syntax for deleting files/directories and reporting what was deleted

    - by Maslow
    VS2010, .NET 4.0-targeted project, if that affects the answers at all. I want to delete the bin and obj directories and output a message with the path of each directory that was deleted.

        <Target Name="CleanOutputs" Condition="'$(MvcBuildViews)'=='true'">
          <Message Text="Cleaning Outputs" Importance="high" />
          <RemoveDir Directories="$(OutputPath);obj" RemovedDirectories="@(removed)" />
          <Message Text="Removed: %(removed.FullPath)" Importance="high" />
          <Message Text=" " />
          <!--<RemoveDir Directories="obj" />-->
          <MakeDir Condition="!Exists('$(OutputPath)')" Directories="$(OutputPath)" />
        </Target>

    is what I have, but the Removed: message never shows.

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone that has an interesting situation. He has 15,000 customers, and he asks if he should have a database for their data per customer. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.

    Read the article

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a Game Engine

    I am planning a computer game and its engine. There will be a 3-dimensional world with a first-person view, and it will be single player for now. The programming language is C++ and it uses OpenGL.

    Data Centered Design Decision

    My design decision is to use a data-centered architecture where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, ai, ... Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager.

    Whether to Use a Relational Database

    Should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This contains virtually all game content like items, characters, inventories, ... except meshes and textures, which are even more performance related, so I will keep them in memory. Is a SQL database fast enough to use for realtime reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    Advantages Would Be

    The advantages of using a relational database like MySQL would be the data-oriented structure which allows fast computation. I would not need objects for representing entities. I could easily query data of objects near the player needed for rendering, and I wouldn't have to take care of data of objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there is already one place where the whole game state is stored.
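
    As a rough illustration of the "objects near the player" point, a SQLite sketch (the table and column names are invented) could look like this; whether it is fast enough to run per frame is exactly the open question:

        -- hypothetical table of world objects
        CREATE TABLE entity (
            id    INTEGER PRIMARY KEY,
            kind  TEXT NOT NULL,
            x     REAL NOT NULL,
            y     REAL NOT NULL,
            z     REAL NOT NULL
        );
        -- index on x narrows the scan for the range predicate below
        CREATE INDEX idx_entity_x ON entity (x);

        -- everything inside a 50-unit box around the player at (:px, :py, :pz)
        SELECT id, kind, x, y, z
        FROM entity
        WHERE x BETWEEN :px - 50 AND :px + 50
          AND y BETWEEN :py - 50 AND :py + 50
          AND z BETWEEN :pz - 50 AND :pz + 50;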

    Read the article

  • Asp.Net error reporting with stacktrace

    - by Helo
    I have an ASP.NET web site set up to log errors using Log4Net (in Global.asax) and redirect users to a custom error page (set up in web.config with: ). On this error page the users have the opportunity to write about what they did when the error occurred and post the description back to us to help fix the problem. My question is this: how do I connect the error stack trace to the user's error report? It appears that .NET has already handled the error when it is written to the log file, and the custom error page has no information about the stack trace. If only I could see the stack trace from the custom error page code-behind, my problem would be solved.

    Read the article

  • Using Coalesce

    - by Derek Dieter
    The COALESCE function is used to find the first non-null value. The function takes a limitless number of parameters in order to evaluate the first non-null. If all the parameters are null, then COALESCE will also return a NULL value.

        -- hard coded example
        SELECT MyValue = COALESCE(NULL, NULL, 'abc', 123)

    The example above returns back [...]

    Read the article

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time consuming part of this operation is getting the data into the temp table. Now, I have to use temp tables, because more than one copy of this app is running at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand:

        INSERT INTO #table (col1, col2, ... col200) VALUES (@col1, @col2, ... @col200)

    and then, for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...
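
    One alternative worth measuring, sketched here purely on the T-SQL side and under the assumptions that the server is SQL Server 2008 or later and that creating a permanent table type is acceptable, is a table-valued parameter that the application fills and sends in a single command instead of one INSERT per row:

        -- a (truncated) table type matching the DataTable's columns
        CREATE TYPE dbo.StagingRow AS TABLE
        (
            col1 INT,
            col2 NVARCHAR(100)
        );
        GO

        -- the application would pass @rows as a structured parameter;
        -- it is declared and filled inline here only so the sketch runs on its own
        DECLARE @rows dbo.StagingRow;
        INSERT INTO @rows (col1, col2) VALUES (1, N'example');

        CREATE TABLE #staging (col1 INT, col2 NVARCHAR(100));

        -- one set-based insert instead of a round trip per DataTable row
        INSERT INTO #staging (col1, col2)
        SELECT col1, col2 FROM @rows;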

    Read the article

  • ASP.NET MVC & Windsor.Castle: working with HttpContext-dependent services

    - by Igor Brejc
    I have several dependency injection services which are dependent on stuff like the HTTP context. Right now I'm configuring them as singletons in the Windsor container in the Application_Start handler, which is obviously a problem for such services. What is the best way to handle this? I'm considering making them transient and then releasing them after each HTTP request. But what is the best way/place to inject the HTTP context into them? The controller factory, or somewhere else?

    Read the article

  • Sql Server 2005 Database Tables - Row Comparison Column By Column.

    - by Goober
    Scenario: I have TWO database tables of exactly the SAME STRUCTURE. The difference between these tables is that one contains data populated by one application and the other is populated by a different application. Each application is trying to produce the same result, but using two different methods of implementation. Proposed idea: What I want to do is run both applications, each of which will produce roughly 35,000 rows containing 10 columns each - so, all in all, 70,000 rows of data. I then want to compare each row of data, COLUMN BY COLUMN, to check whether the values are the same or not. Current thoughts: Since there is so much data to compare, I feel that the best way to do this would be to write an application, preferably in C# (but if necessary, T-SQL), to compare each row of data column by column and write out any failed comparisons to a text log file. Question: Could anybody suggest an efficient way in which to perform column-by-column row comparison for 70,000 rows' worth of data? I'm struggling for ideas on how to tackle this problem. Extra detail: The two applications are both written in C# .NET 3.5. The database is running on SQL Server 2005. Help greatly appreciated.
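
    Whether driven from C# or done directly in T-SQL, a set-based sketch of the comparison itself (the table names dbo.ResultsA and dbo.ResultsB are assumptions) is often enough; EXCEPT compares every column and treats NULLs as equal, which helps here:

        -- rows in A with no column-for-column match in B
        SELECT * FROM dbo.ResultsA
        EXCEPT
        SELECT * FROM dbo.ResultsB;

        -- and the reverse direction
        SELECT * FROM dbo.ResultsB
        EXCEPT
        SELECT * FROM dbo.ResultsA;

    Joining the mismatched rows back to the other table on a key column would then narrow each failure down to the individual columns that differ.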

    Read the article

  • Mavericks: Safari does not log in to web services

    - by Roberto
    Since I upgraded from ML to Mavericks, Safari is no longer able to log me into Facebook. When I go to the login page it suggests the correct credentials, I hit the Login button, the page refreshes, but nothing happens, as if the credentials were empty. Firefox works perfectly; I even logged out and back in to make sure the credentials are the same ones that Safari suggests, and so they are. Needless to say, for a different user on the same Mavericks install, Safari logs in correctly. The same happens with most web pages that need a login, webmail for instance: I have two accounts on different webmail providers and neither of them works. Of course, using the same mail services with POP3 works fine. Even on this very site I cannot post a thing with Safari; I'm going to switch to Firefox to be able to post this question. Again, Firefox or a different user works fine. Do you have any idea/suggestion?

    Read the article