Search Results

Search found 84007 results on 3361 pages for 'sql system table'.


  • How to understand the programming of an operating system

    - by piemesons
    Hello, I want to learn how an operating system works. I don't want to make my own operating system; I just want to understand how one works. I can find the source code of any open source OS, but how do I start? With the first elementary kernel (whatever that is)? Somebody suggested I try Linux From Scratch. Please guide me along a proper path to follow. I am ready to invest three to four years just to understand the basics. I have good fundamentals in C, C++, PHP, OOP, and compiler design.

    Read the article

  • DotNet Get User Operating System (HTTP_USER_AGENT)

    - by rockinthesixstring
    I'm looking at building an exhaustive function that returns a friendly name for the user's operating system. I think I have most of the Windows strings down, but I'm not sure about Linux, OS X, and others. Does anyone know where I can find an exhaustive list of HTTP_USER_AGENT values?

        'Gets the user's operating system
        Public Shared Function GetUserOS() As String
            Dim strAgent As String = HttpContext.Current.Request.ServerVariables("HTTP_USER_AGENT")
            'Windows OS's
            If InStr(strAgent, "Windows NT 6.1") Then : Return "Windows 7"
            ElseIf InStr(strAgent, "Windows NT 6.0") Then : Return "Windows Vista"
            ElseIf InStr(strAgent, "Windows NT 5.2") Then : Return "Windows Server 2003"
            ElseIf InStr(strAgent, "Windows NT 5.1") Then : Return "Windows XP"
            ElseIf InStr(strAgent, "Windows NT 5.0") Then : Return "Windows 2000"
            ElseIf InStr(strAgent, "Windows 98") Then : Return "Windows 98"
            ElseIf InStr(strAgent, "Windows 95") Then : Return "Windows 95"
            'Mac OS's
            ElseIf InStr(strAgent, "Mac OS X") Then : Return "Mac OS X"
            'Linux OS's
            ElseIf InStr(strAgent, "Linux") Then : Return "Linux"
            Else : Return "Unknown"
            End If
        End Function 'GetUserOS

    Read the article

  • ORDER BY "human" alphabetical order using SQL string manipulation

    - by supertrue
    I have a table of posts with titles that are in "human" alphabetical order but not in computer alphabetical order. These come in two flavors, numerical and alphabetical:

        Numerical:    Figure 1.9, Figure 1.10, Figure 1.11 ...
        Alphabetical: Figure 1A ... Figure 1Z ... Figure 1AA

    If I ORDER BY title, the result is that 1.10-1.19 come between 1.1 and 1.2, and 1AA-1AZ come between 1A and 1B. But this is not what I want; I want "human" alphabetical order, in which 1.10 comes after 1.9 and 1AA comes after 1Z. I am wondering if there's still a way in SQL to get the order that I want using string manipulation (or something else I haven't thought of). I am not an expert in SQL, so I don't know if this is possible, but if there were a way to do conditional replacement, it seems I could impose the order I want by doing this: delete the period (which can be done with REPLACE, right?); if the remaining figure number is more than three characters, add a 0 (zero) after the first character. This would seem to give me the outcome I want: 1.9 would become 109, which comes before 110; 1Z would become 10Z, which comes before 1AA. But can it be done in SQL? If so, what would the syntax be? Note that I don't want to modify the data itself, just to output the results of the query in the order described. This is in the context of a WordPress installation, but I think the question is more suitably an SQL question because various things (such as pagination) depend on the ordering happening at the MySQL query stage, rather than in PHP.
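
    One way to express the poster's padding idea is a computed sort key in the ORDER BY clause. A minimal sketch in MySQL, assuming the WordPress wp_posts.post_title column (an illustrative name) and assuming all titles share the same "Figure N" stem as in the examples above; mixed stems of different widths would need real padding:

        -- Sketch: natural-ish ordering without modifying the stored titles
        SELECT post_title
        FROM wp_posts
        ORDER BY
            LENGTH(REPLACE(post_title, '.', '')),  -- shorter suffixes first: 1.9 before 1.10, 1Z before 1AA
            REPLACE(post_title, '.', '');          -- plain string order among equal lengths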

    Read the article

  • SQL Server insert performance with and without primary key

    - by Eric
    Summary: I have a table populated via the following:

        insert into the_table (...)
        select ... from some_other_table

    Running the above query with no primary key on the_table is ~15x faster than running it with a primary key, and I don't understand why.

    The details: I think this is best explained through code examples. I have a table:

        create table the_table (
            a int not null,
            b smallint not null,
            c tinyint not null
        );

    If I add a primary key, this insert query is terribly slow:

        alter table the_table
        add constraint PK_the_table primary key(a, b);

        -- Inserting ~880,000 rows
        insert into the_table (a,b,c)
        select a,b,c from some_view;

    Without the primary key, the same insert query is about 15x faster. However, after populating the_table without a primary key, I can add the primary key constraint and that only takes a few seconds. This one really makes no sense to me.

    More info:

        - The estimated execution plan shows 0% total query time spent on the clustered index insert.
        - SQL Server 2008 R2 Developer edition, 10.50.1600.

    Any ideas?
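
    For what it's worth, a common way to keep a clustered index from thrashing during a large load is to feed the rows in key order, so the insert tends toward an append rather than a stream of page splits. A minimal sketch of that idea, assuming the PK above (whether the optimizer still sorts depends on the plan):

        -- Sketch: present rows in clustered-key order to favor sequential inserts
        INSERT INTO the_table (a, b, c)
        SELECT a, b, c
        FROM some_view
        ORDER BY a, b;  -- matches PK_the_table (a, b)

    The load-into-a-heap-then-add-the-PK pattern the poster stumbled on is the other standard workaround: building the index once over the bulk data is cheaper than maintaining it row by row.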

    Read the article

  • UPDATE Table SET Field

    - by davlyo
    This is my very first post! Bear with me. I have an UPDATE statement that I am trying to understand how SQL Server handles:

        UPDATE a
        SET a.vField3 = b.vField3
        FROM tableName a
        INNER JOIN tableName b
            ON a.vField1 = b.vField1
           AND b.nField2 = a.nField2 - 1

    This is my query in its simplest form. vField1 is a varchar, nField2 is an int (autonumber), and vField3 is a varchar. I have left the WHERE clause out, so understand there is logic that otherwise makes this a necessity. Say vField1 is a customer number, and that customer has 3 records; the value in nField2 is 1, 2, and 3 consecutively; vField3 is a status.

    When the update comes to a.nField2 = 1, there is no a.nField2 - 1, so it continues. When the update comes to a.nField2 = 2, b.nField2 = 1. When the update comes to a.nField2 = 3, b.nField2 = 2. So when the update is on a.nField2 = 2, alias b reflects what is on the line prior (b.nField2 = 1), and it sets the varchar value of a.vField3 = b.vField3. When the update is on a.nField2 = 3, alias b reflects what is on the line prior (b.nField2 = 2), and it (should) set the varchar value of a.vField3 = b.vField3.

    When the process is complete, the second of three records looks as expected: the value in vField3 of the second record reflects the value in vField3 from the first record. However, vField3 of the third record does not reflect the value in vField3 from the second record. I think this demonstrates that SQL Server may be producing a transaction of some sort and then an update. Question: How can I get the DB to update after each transaction so I can reference the values generated by each transaction?
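
    The behavior follows from UPDATE being a single atomic statement: every reference to alias b sees the table as it was before the statement began, so values never cascade from row to row within one UPDATE. A minimal sketch of an explicit row-by-row carry, assuming nField2 is dense starting at 1 as in the example (table and column names as above):

        -- Sketch: carry vField3 forward one row at a time, in nField2 order
        DECLARE @n int, @max int;
        SELECT @n = MIN(nField2) + 1, @max = MAX(nField2) FROM tableName;
        WHILE @n <= @max
        BEGIN
            UPDATE a
            SET a.vField3 = b.vField3
            FROM tableName a
            INNER JOIN tableName b
                ON a.vField1 = b.vField1
               AND b.nField2 = a.nField2 - 1
            WHERE a.nField2 = @n;  -- each pass sees the previous pass's result
            SET @n = @n + 1;
        END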

    Read the article

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id :=
                (SELECT id FROM django_content_type
                 WHERE app_label = 'web' AND model = 'member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                            object_id_int, title, description, content,
                                            url, meta_encoded)
            SELECT 'default', member_content_type_id, web_member.id, web_member.id,
                   web_member.name, '',
                   web_user.email||' '||web_member.normalized_name||' '||web_country.name,
                   '', '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active=TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of the search index.
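
    On PostgreSQL 9.5 and later this can be written as a true upsert, though it needs the unique index on (content_type_id, object_id_int) that the edit says is currently absent. A sketch under that assumption, dropped into the function body in place of the DELETE + INSERT (stale rows for deactivated users would still need a separate DELETE):

        -- Sketch: requires PostgreSQL 9.5+ and a unique index on the conflict target
        CREATE UNIQUE INDEX watson_searchentry_ct_obj
            ON watson_searchentry (content_type_id, object_id_int);

        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                        object_id_int, title, description, content,
                                        url, meta_encoded)
        SELECT 'default', member_content_type_id, web_member.id, web_member.id,
               web_member.name, '',
               web_user.email || ' ' || web_member.normalized_name || ' ' || web_country.name,
               '', '{}'
        FROM web_member
        INNER JOIN web_user ON (web_member.user_id = web_user.id)
        INNER JOIN web_country ON (web_member.country_id = web_country.id)
        WHERE web_user.is_active = TRUE
        ON CONFLICT (content_type_id, object_id_int)
            DO UPDATE SET title = EXCLUDED.title, content = EXCLUDED.content;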

    Read the article

  • FreeText COUNT query on multiple tables is super slow

    - by Eric P
    I have two tables:

        Product: ID, Name, SKU
        Brand:   ID, Name

    The Product table has about 120K records; the Brand table has 30K records. I need to find the count of all products whose name or brand name matches a specific keyword. I use full-text CONTAINS like this:

        SELECT count(*)
        FROM Product
        INNER JOIN Brand ON Product.BrandID = Brand.ID
        WHERE (contains(Product.Name, 'pants') or contains(Brand.Name, 'pants'))

    This query takes about 17 secs. I rebuilt the full-text index before running it. If I only check Product.Name, the query takes less than 1 sec. Same if I only check Brand.Name. The issue occurs if I use the OR condition. If I switch the query to use LIKE:

        SELECT count(*)
        FROM Product
        INNER JOIN Brand ON Product.BrandID = Brand.ID
        WHERE Product.Name LIKE '%pants%' or Brand.Name LIKE '%pants%'

    it takes 1 sec. I read on MSDN (http://msdn.microsoft.com/en-us/library/ms187787.aspx) that to search on multiple tables, you should use a joined table in your FROM clause to search on a result set that is the product of two or more tables. So I added an inner joined table to FROM:

        SELECT count(*)
        FROM (select Product.Name ProductName, Product.SKU ProductSKU,
                     Brand.Name as BrandName
              FROM Product
              inner join Brand on product.BrandID = Brand.ID) as TempTable
        WHERE contains(TempTable.ProductName, 'pants')
           or contains(TempTable.BrandName, 'pants')

    This results in an error: Cannot use a CONTAINS or FREETEXT predicate on column 'ProductName' because it is not full-text indexed. So the question is: why could the OR condition be causing such a slow query?
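
    A rewrite often suggested for the OR case is to let each full-text index be probed on its own and combine the key sets, instead of forcing one predicate across the join. A hedged sketch of that shape (table and column names as above; UNION deduplicates products matched both ways):

        -- Sketch: probe each full-text index separately, then count the combined hits
        SELECT count(*)
        FROM (
            SELECT p.ID
            FROM Product p
            WHERE contains(p.Name, 'pants')
            UNION
            SELECT p.ID
            FROM Product p
            INNER JOIN Brand b ON p.BrandID = b.ID
            WHERE contains(b.Name, 'pants')
        ) AS hits;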

    Read the article

  • Debugging a local SQL Server 2008 stored procedure from Visual Studio 2008

    - by Ricibob
    Hi all, there are a few posts about this question around, but most concern remote debugging; here everything is on the same machine, with Visual Studio 2008. I have a data connection to a localhost SQL Server 2008 instance using Windows authentication with an admin account; this account is a member of sysadmin in SQL Server. I double-click a stored proc and add a breakpoint. I right-click and select "Step into stored procedure". I get the loathed and feared "Canceled by user." in the output window. Does anyone know what's going on? Further, right-clicking on the data connection shows both "Application debugging" and "Allow SQL/CLR Debugging". I have checked "Enable SQL Server debugging" in the properties of the C# client app. If I run that in debug and try to step into the stored proc code at "command.ExecuteNonQuery()", the breakpoints in the stored proc become disabled and are not hit. After doing this once, the right-click options on the stored proc, "Execute" and "Step into stored procedure", are greyed out/disabled. To get them back I have to restart Visual Studio (refreshing the connection doesn't do it). Any help much appreciated!! Cheers.

    Read the article

  • SQL Server Connection Timeout C#

    - by Termin8tor
    First off, I'd like to let everyone know I have searched for my particular problem and can't seem to find what's causing it. I have an SQL Server 2008 instance running on a network machine and a client I have written connecting to it. To connect, I have a small segment of code that establishes a connection to the SQL Server 2008 instance and returns a DataTable populated with the results of whatever query I run against the server; all pretty standard stuff really. Anyway, the issue is: whenever I open my program and call this method, the first call, regardless of what I've set the Connection Timeout value to in the connection string, takes about 15 seconds and then times out. Bizarrely, though, the second or third call I make to the method will work without a problem. I have opened up the ports for SQL Server on the server machine as outlined in this article: How to Open firewall ports for SQL Server, and verified that it is correctly configured. Can anyone see a particular problem in my code?

        string _connectionString =
            "Server=" + @Properties.Settings.Default.sqlServer +
            "; Initial Catalog=" + @Properties.Settings.Default.sqlInitialCatalog +
            ";User Id=" + @Properties.Settings.Default.sqlUsername +
            ";Password=" + @Properties.Settings.Default.sqlPassword +
            "; Connection Timeout=1";

        private DataTable ExecuteSqlStatement(string command)
        {
            using (SqlConnection conn = new SqlConnection(_connectionString))
            {
                try
                {
                    conn.Open();
                    using (SqlDataAdapter adaptor = new SqlDataAdapter(command, conn))
                    {
                        DataTable table = new DataTable();
                        adaptor.Fill(table);
                        return table;
                    }
                }
                catch (SqlException e)
                {
                    throw e;
                }
            }
        }

    The SqlException that is caught is: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." This occurs at the conn.Open(); line in the code snippet above. If anyone has any ideas, that'd be great!

    Read the article

  • SQL Server 2008 - Update a temporary table

    - by user336786
    Hello, I have a stored procedure in which I am trying to retrieve the last ticket completed by each user listed in a comma-delimited string of usernames. A user may not have a ticket associated with them; in this case I know I just need to return null. Each record needs to include the comments associated with it. The tables I am working with are defined as follows:

        User:         UserName, FirstName, LastName
        Ticket:       ID, CompletionDateTime, AssignedTo, AssignmentDate, StatusID
        TicketStatus: ID, Comments

    Currently, I'm trying the following:

        CREATE TABLE #Tickets (
            [UserName] nvarchar(256),
            [FirstName] nvarchar(256),
            [LastName] nvarchar(256),
            [TicketID] int,
            [DateCompleted] datetime,
            [Comments] text
        )

        -- This variable is actually passed into the procedure
        DECLARE @userList NVARCHAR(max)
        SET @userList = 'user1,user2,user2'

        -- Obtain the user information for each user
        INSERT INTO #Tickets ([UserName], [FirstName], [LastName])
        SELECT u.[UserName], u.[FirstName], u.[LastName]
        FROM User u
        INNER JOIN dbo.ConvertCsvToTable(@userList) l ON u.UserName = l.item

    At this point, I have the username, first and last name for each user passed in. However, I do not know how to actually get the last ticket completed for each of these users. How do I do this? I believe I should be updating the temp table I have created. At the same time, I do not know how to get just the last record in an update statement. Thank you!
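
    The "last row per group" step is commonly handled with ROW_NUMBER() in SQL Server 2008. A minimal sketch that fills the remaining columns of the temp table, assuming Ticket.AssignedTo holds the username and a NULL CompletionDateTime means not yet completed (both assumptions, not confirmed by the schema above):

        ;WITH LastTicket AS (
            SELECT t.AssignedTo, t.ID, t.CompletionDateTime, ts.Comments,
                   ROW_NUMBER() OVER (PARTITION BY t.AssignedTo
                                      ORDER BY t.CompletionDateTime DESC) AS rn
            FROM Ticket t
            INNER JOIN TicketStatus ts ON t.StatusID = ts.ID
            WHERE t.CompletionDateTime IS NOT NULL
        )
        UPDATE tmp
        SET TicketID      = lt.ID,
            DateCompleted = lt.CompletionDateTime,
            Comments      = lt.Comments
        FROM #Tickets tmp
        INNER JOIN LastTicket lt ON lt.AssignedTo = tmp.UserName
        WHERE lt.rn = 1;

    Users with no completed ticket simply never match the join, so their columns stay NULL, which is the behavior the poster wants.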

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table: an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) <  '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried indexing on the date expression:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
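
    Two things usually help in this shape of query: filtering on the raw column so a plain btree index on "timestamp" can serve the range, and giving the sort enough memory to stay off disk. A hedged sketch (the GROUP BY can keep using the date() expression):

        -- Sketch: make the range predicate sargable against a plain timestamp index
        CREATE INDEX actions_timestamp_idx ON actions ("timestamp");

        SELECT date("timestamp") AS day, count(*)
        FROM actions
        WHERE "timestamp" >= '2010-01-01'
          AND "timestamp" <  '2011-01-01'
        GROUP BY day
        ORDER BY day;

    Since the range covers most of the 20M rows, the planner may still prefer a sequential scan; in that case raising work_mem for the session (e.g. SET work_mem = '512MB';) to avoid the 372MB on-disk merge sort is likely the bigger win.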

    Read the article

  • Getting input in system() function (Mac)

    - by Alex
        #include <iostream>
        using namespace std;

        int main() {
            short int enterVal;
            cout << "enter a number to say: " << endl;
            cin >> enterVal;
            system("say "%d"") << enterVal;
            return 0;
        }

    This is what I am currently trying. I want the user to enter a number and have the system() function say it, basically. The code above has an error which says "'d' was not declared in this scope". Thanks in advance.

    Read the article

  • Data mixing SQL Server

    - by Pythonizo
    I have three tables and a date range: Services, ServicesClients, ServicesClientsDone, plus @StartDate and @EndDate.

        Services:
        ID | Name
        1  | Supervisor
        2  | Monitor
        3  | Manufacturer

        ServicesClients:
        IDServiceClient | IDClient | IDService
        1               | 1        | 1
        2               | 1        | 2
        3               | 2        | 2
        4               | 2        | 3

        ServicesClientsDone:
        IDServiceClient | Period
        1               | 201208
        3               | 201210

    Period = YYYYMM. I need to insert into ServicesClientsDone the month range from @StartDate up to @EndDate. I also have a temporary table (#Periods) with the following list:

        Period
        201208
        201209
        201210

    The query I need should give me back the following list:

        IDServiceClient | Period
        1               | 201209
        1               | 201210
        2               | 201208
        2               | 201209
        2               | 201210
        3               | 201208
        3               | 201209
        4               | 201208
        4               | 201209
        4               | 201210

    which is the client services paired with the ranges in the temporary table, excluding those already inserted. This is what I have. The periods table:

        DECLARE @i int
        DECLARE @mm int
        DECLARE @yyyy int
        DECLARE @StartDate datetime
        DECLARE @EndDate datetime

        SET @EndDate = (SELECT GETDATE())
        SET @StartDate = (SELECT DATEADD(MONTH, -3, GETDATE()))

        CREATE TABLE #Periods (Period int)

        SET @i = 0
        WHILE @i <= DATEDIFF(MONTH, @StartDate, @EndDate)
        BEGIN
            SET @mm = DATEPART(MONTH, DATEADD(MONTH, @i, @StartDate))
            SET @yyyy = DATEPART(YEAR, DATEADD(MONTH, @i, @StartDate))
            INSERT INTO #Periods (Period)
            VALUES (CAST(@yyyy as varchar(4)) + RIGHT('00' + CONVERT(varchar(6), @mm), 2))
            SET @i = @i + 1
        END

    Relation between ServicesClients and Services:

        SELECT s.Name, sc.IDClient
        FROM Services s
        JOIN ServicesClients AS sc ON sc.IDService = s.ID

    Services already done, and when:

        SELECT s.Name, scd.Period
        FROM Services s
        JOIN ServicesClients AS sc ON sc.IDService = s.ID
        JOIN ServicesClientsDone AS scd ON scd.IDServiceClient = sc.IDServiceClient
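
    If the goal is "every (service client, period) pair from #Periods that is not already in ServicesClientsDone", a cross join filtered by NOT EXISTS produces exactly the sample output above. A minimal sketch using the same tables as the question:

        SELECT sc.IDServiceClient, p.Period
        FROM ServicesClients sc
        CROSS JOIN #Periods p
        WHERE NOT EXISTS (
            SELECT 1
            FROM ServicesClientsDone scd
            WHERE scd.IDServiceClient = sc.IDServiceClient
              AND scd.Period = p.Period
        )
        ORDER BY sc.IDServiceClient, p.Period;

    Once verified, the same SELECT can feed an INSERT INTO ServicesClientsDone.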

    Read the article

  • Stored Procedure, 'incorrect syntax error'

    - by jacksonSD
    Attempting to figure out sp's, and I'm getting this error: "Msg 156, Level 15, State 1, Line 5: Incorrect syntax near the keyword 'Procedure'." The error seems to be on the if, but I can drop other existing tables with stored procedures the exact same way, so I'm not clear on why this isn't working. Can anyone shed some light?

        Begin
            Set nocount on
            Begin Try
                Create Procedure uspRecycle as
                if OBJECT_ID('Recycle') is not null
                    Drop Table Recycle
                create table Recycle (
                    RecycleID integer constraint PK_integer primary key,
                    RecycleType nchar(10) not null,
                    RecycleDescription nvarchar(100) null)
                insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                    values ('1','Compost','Product is compostable, instructions included in packaging')
                insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                    values ('2','Return','Product is returnable to company for 100% reuse')
                insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                    values ('3','Scrap','Product is returnable and will be reclaimed and reprocessed')
                insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                    values ('4','None','Product is not recycleable')
            End Try
            Begin Catch
                DECLARE @ErrMsg nvarchar(4000);
                SELECT @ErrMsg = ERROR_MESSAGE();
                Throw 50001, @ErrMsg, 1;
            End Catch

            -- checking to see if table exists and is loaded:
            If (Select count(*) from Recycle) > 1
            begin
                Print 'Recycle table created and loaded ';
                Print getdate()
            End
            set nocount off
        End
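
    The likely culprit: CREATE PROCEDURE must be the first statement in its batch, so it cannot sit inside a BEGIN/TRY block the way DROP TABLE can. A hedged sketch of the usual batch layout (GO is the client-side batch separator in SSMS/sqlcmd, not T-SQL itself):

        -- Sketch: DROP and CREATE live in separate batches; CREATE PROCEDURE starts its own
        IF OBJECT_ID('uspRecycle', 'P') IS NOT NULL
            DROP PROCEDURE uspRecycle;
        GO
        CREATE PROCEDURE uspRecycle
        AS
        BEGIN
            SET NOCOUNT ON;
            IF OBJECT_ID('Recycle') IS NOT NULL
                DROP TABLE Recycle;
            CREATE TABLE Recycle (
                RecycleID integer CONSTRAINT PK_integer PRIMARY KEY,
                RecycleType nchar(10) NOT NULL,
                RecycleDescription nvarchar(100) NULL);
            -- the four INSERT statements from the question go here unchanged
            SET NOCOUNT OFF;
        END
        GO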

    Read the article

  • SQL - Query to display average as either "longer than" or "shorter than"

    - by user1840801
    Here are the tables I've created:

        CREATE TABLE Plane_new (
            Pnum char(3),
            Feature varchar2(20),
            Ptype varchar2(15),
            primary key (Pnum));

        CREATE TABLE Employee_new (
            eid char(3),
            ename varchar(10),
            salary number(7,2),
            mid char(3),
            PRIMARY KEY (eid),
            FOREIGN KEY (mid) REFERENCES Employee_new);

        CREATE TABLE Pilot_new (
            eid char(3),
            Licence char(9),
            primary key (eid),
            foreign key (eid) references Employee_new on delete cascade);

        CREATE TABLE FlightI_new (
            Fnum char(4),
            Fdate date,
            Duration number(2),
            Pid char(3),
            Pnum char(3),
            primary key (Fnum),
            foreign key (Pid) references Pilot_new (eid),
            foreign key (Pnum) references Plane_new);

    And here is the query I must complete: for each flight, display its number, the name of the pilot who implemented the flight, and the words 'Longer than average' if the flight duration was longer than average, or the words 'Shorter than average' if the flight duration was shorter than or equal to the average. For the column holding the words 'Longer than average' or 'Shorter than average', make a header Length. Here is what I've come up with, with no luck:

        SELECT F.Fnum, E.ename,
            CASE Length
                WHEN F.Duration > (SELECT AVG(F.Duration) FROM FlightI_new F) THEN "Longer than average"
                WHEN F.Duration <= (SELECT AVG(F.Duration) FROM FlightI_new F) THEN 'Shorter than average'
            END
        FROM FlightI_new F
        LEFT OUTER JOIN Employee_new E ON F.Pid = E.eid
        GROUP BY F.Fnum, E.ename;

    Where am I going wrong?
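
    Two details trip this up: CASE Length starts a simple CASE that compares the expression Length to each WHEN value, whereas boolean comparisons need a searched CASE with nothing after the CASE keyword; and string literals must use single quotes, with the column header supplied via AS. A hedged sketch of the corrected shape (the subquery alias is renamed so it does not shadow F):

        SELECT F.Fnum, E.ename,
               CASE
                   WHEN F.Duration > (SELECT AVG(F2.Duration) FROM FlightI_new F2)
                       THEN 'Longer than average'
                   ELSE 'Shorter than average'
               END AS Length
        FROM FlightI_new F
        LEFT OUTER JOIN Employee_new E ON F.Pid = E.eid;

    The GROUP BY is unnecessary here, since the outer query uses no aggregate of its own.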

    Read the article

  • Creating an email notification system based on polling database rows

    - by Ashish Sharma
    I have to design an email notification system based on the following requirements (a polling sketch follows the lists below):

        - The email notifications are created based on polling rows in a MySQL 5.5 DB table when they are in a particular 'Completed' state.
        - The email notification should be sent out no more than 5 minutes from the time the row was created in the DB table (at the time of row creation, the state of the row might not be 'Completed').
        - Once the 5 minutes for a DB table row expire without it reaching the 'Completed' state, a separate email notification needs to be sent (basically telling the user that the original notification will be delayed), and then the original notification is sent as and when the row state reaches 'Completed'.

    The rest of the system requirements are:

        - Adding relevant checks to monitor the whole system via an MBeans interface.
        - The system should be scalable, so that if the rate of DB table row creation increases, the email notification system can ramp up.

    So I request suggestions along the following lines:

        - What approach should I take in solving the problem described, from a programming/design-pattern point of view?
        - Suggestions for any third-party plugin/software that can be used to solve the problem described?
        - Points to take care of regarding scalability and monitoring the health of the system?

    Java is the language of preference, but I am open to using off-the-shelf components that can be interfaced with Java or provide standard ports for communication. Currently I do have an in-house system (written in Java) that caters to the specified requirements, but it's now crumbling under increased load, and I want to give the problem a fresh look. Thanks in advance, Ashish
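
    The polling step itself is a small piece of the design. A hedged sketch of the two selection queries in MySQL, with entirely hypothetical table and column names (requests, state, notified, created_at), just to pin down the two cases the requirements describe:

        -- Rows that completed in time: send the normal notification
        SELECT id FROM requests
        WHERE state = 'Completed' AND notified = 0;

        -- Rows older than 5 minutes and still not completed: send the delay notice
        SELECT id FROM requests
        WHERE state <> 'Completed'
          AND notified = 0
          AND created_at < NOW() - INTERVAL 5 MINUTE;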

    Read the article

  • Bootloader error Ubuntu 12.04, system goes to Grub-rescue instead of booting

    - by user83508
    I am trying to install Ubuntu 12.04 on my system, but it constantly gives me a bootloader install failed error. I have tried a lot to solve this issue by reading articles over the internet, but still no gain. Firstly, since the bootloader was not getting installed, I tried to install it on all the alternative paths given in the installer; when that failed, I selected installing Ubuntu without a bootloader. Then I tried to manually install the bootloader from the terminal in the "Try Ubuntu" session via grub-install, but I was not able to do that. Then I tried using Boot-Repair, and it was also not able to install the bootloader, because afterwards my system shows grub-rescue. I tried to use Boot-Repair to install the bootloader on a separate partition mounted to /boot, and my system still boots to grub-rescue. The error my system shows during boot is:

        Error: no such device: 04ac0510-bd4f-43b8-b885-11c4dec21db8

    I am not dual booting; Ubuntu is the only OS I am installing. I am using RAID 0 with two Western Digital Blue hard drives, so I am not sure whether that is right or not. The details given by Boot-Repair are at the following link:

        http://paste.ubuntu.com/1147208/

    Afterwards, I made one more change: I installed Ubuntu again, and this time I installed the bootloader on a different partition at /boot. After this, the bootloader error is gone, but I am still not able to boot to Ubuntu, as I get the same error I was getting before. I have not installed dmraid; I feel it is necessary for RAID 0, but I thought Ubuntu already has RAID drivers. Moreover, for the dmraid installation instructions for 12.04, I used the ones for 10.04 and selected to install the bootloader on the partitions from the dropdown. This time the installation finished normally without an error, but I am still not able to boot my system, as the same error shows during booting this time also. Now I am stuck, and I have no clue how I can boot my system. Please tell me how I can boot my system.

    Read the article

  • System testing - making sure the system conforms to specification. Validation?

    - by user970696
    After weeks of research I have nearly completed my thesis, yet I am unable to clear up the confusion contained in all previous threads here (and in many books). During system testing, we check the system's functions against the system analysis (functional system design), but that fits the definition of verification according to many books. I follow ISO 12207, however, which considers all testing to be validation (making sure the work product meets the requirements for its intended use). How can I justify that unit testing or system testing is validation, even though I check it against a specification, which fulfils the definition of verification? When testing that e.g. a "Save" button works, is it validation? This picture shows my understanding of V&V, so different from many other sources, including ISTQB etc. The essential problem I have is that a book using the same picture also states in another place that: test activities in the area of validation are usability, alpha and beta testing; for verification, testable system requirements are defined whose correct implementation can be tested through system tests. Isn't that the opposite of what the picture says? Most books present the following picture, where validation is just making sure that customer needs are satisfied. Mind you that according to ISO, the validation activity is testing.

    Read the article

  • Web migration of a VB6 system with VWG

    - by Webgui
    Brinks Bolivia's eSAC System (Customer Service) allows registering all the different kinds of contacts for a customer, in addition to maintaining an updated status of each service or customer request, in order to have accurate information and perform the appropriate procedures for all applications. The system was originally developed in VB6, and since web access was essential, it was offered via Citrix. Because the application's performance was a critical issue, and because of the need to offer the system without client-specific installations, the company looked for a solution that would resolve those drawbacks of using Citrix. Searching for a solution that would allow it to offer the eSAC system over the web without specific client installations, and provide sufficient performance even with limited bandwidth, led Brinks to the decision to migrate their VB6 Customer Service system to Visual WebGui. "Developing on Visual WebGui, we were able to migrate the system to a web environment and even add new features in less time, which allows us to offer it over a standard web browser with better performance and no installations, as was required with Citrix," concluded Alexander Cuellar. The full article and screenshots of the system are available here.

    Read the article

  • Can't start system due to mdadm failing

    - by user101212
    I used to have a 5-disk RAID 5 partition, all working very well. I then decided to add 3 more disks to it, totaling 8 equal disks. I opened Webmin and just asked it to add the disks. Then I realized the three disks had NTFS partitions, which mdadm didn't complain about, so I tried to stop the grow operation to remove the Windows partitions. I tried to remove a disk using the same Webmin, but (as you might guess, and call me a fool...) the system became unstable. On restarting the system, I started receiving these messages:

        udev[126]: timeout: killing '/sbin/mdadm --incremental /dev/sdh1' [311]
        udev[124]: timeout: killing '/sbin/mdadm --detail --export /dev/md0' [316]

    I formatted the system disk, hoping to get a system up and running. I did that with all RAID disks disconnected, so everything was fine. I then reconnected the disks, which was also OK, and finally installed mdadm using apt-get. On reboot, the system found mdadm's intention of growing the array, so the same messages appear again. In other words: I can't even reach a command prompt to do something. Any ideas of what to do? I believe I could turn off the system, disconnect the disks, and look for the mdadm.conf file. Would that be a good idea? I'm no Linux expert, so I'm really lost here.

    Read the article

  • TSQL Conditionally Select Specific Value

    - by Dzejms
    This is a follow-up to #1644748, where I successfully answered my own question, but Quassnoi helped me realize that it was the wrong question. He gave me a solution that worked for my sample data, but I couldn't plug it back into the parent stored procedure because I fail at SQL 2005 syntax. So here is an attempt to paint the broader picture and ask what I actually need. This is part of a stored procedure that returns a list of items in a bug-tracking application I've inherited. There are over 100 fields and 26 joins, so I'm pulling out only the most relevant bits.

        SELECT tickets.ticketid,
               tickets.tickettype,
               tickets_tickettype_lu.tickettypedesc,
               tickets.stage,
               tickets.position,
               tickets.sponsor,
               tickets.dev,
               tickets.qa,
               DATEDIFF(DAY, ticket_history_assignment.savedate, GETDATE()) as 'daysinqueue'
        FROM dbo.tickets WITH (NOLOCK)
        LEFT OUTER JOIN dbo.tickets_tickettype_lu WITH (NOLOCK)
            ON tickets.tickettype = tickets_tickettype_lu.tickettypeid
        LEFT OUTER JOIN dbo.tickets_history_assignment WITH (NOLOCK)
            ON tickets_history_assignment.ticketid = tickets.ticketid
            AND tickets_history_assignment.historyid = (
                SELECT MAX(historyid)
                FROM dbo.tickets_history_assignment WITH (NOLOCK)
                WHERE tickets_history_assignment.ticketid = tickets.ticketid
                GROUP BY tickets_history_assignment.ticketid
            )
        WHERE tickets.sponsor = @sponsor

    The area of interest is the daysinqueue subquery mess. The tickets_history_assignment table looks roughly as follows:

        declare @tickets_history_assignment table (
            historyid int,
            ticketid int,
            sponsor int,
            dev int,
            qa int,
            savedate datetime
        )

        insert into @tickets_history_assignment values (1521402, 92774, 20, 14, 20, '2009-10-27 09:17:59.527')
        insert into @tickets_history_assignment values (1521399, 92774, 20, 14, 42, '2009-08-31 12:07:52.917')
        insert into @tickets_history_assignment values (1521311, 92774, 100, 14, 42, '2008-12-08 16:15:49.887')
        insert into @tickets_history_assignment values (1521336, 92774, 100, 14, 42, '2009-01-16 14:27:43.577')

    Whenever a ticket is saved, the current values for sponsor, dev and qa are stored in the tickets_history_assignment table with the ticketid and a timestamp. So it is possible for someone to change the value for qa but leave sponsor alone. What I want to know, based on all of these conditions, is the historyid of the record in the tickets_history_assignment table where the sponsor value was last changed, so that I can calculate the value for daysinqueue. If a record is inserted into the history table and only the qa value has changed, I don't want that record, so simply relying on MAX(historyid) won't work for me. Quassnoi came up with the following, which seemed to work with my sample data, but I can't plug it into the larger query; SQL Manager complains about the WITH statement.

        ;WITH rows AS (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY ticketid ORDER BY savedate DESC) AS rn
            FROM @Table
        )
        SELECT rl.sponsor, ro.savedate
        FROM rows rl
        CROSS APPLY (
            SELECT TOP 1 rc.savedate
            FROM rows rc
            JOIN rows rn
                ON rn.ticketid = rc.ticketid
                AND rn.rn = rc.rn + 1
                AND rn.sponsor <> rc.sponsor
            WHERE rc.ticketid = rl.ticketid
            ORDER BY rc.rn
        ) ro
        WHERE rl.rn = 1

    I played with it yesterday afternoon and got nowhere, because I don't fundamentally understand what is going on here and how it should fit into the larger context. So, any takers?

    UPDATE: OK, here's the whole thing. I've been switching some of the table and column names in an attempt to simplify things, so here's the full unedited mess.

    [snip - old bad code]

    Here are the errors:

        Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 159
        Incorrect syntax near ';'.
        Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 179
        Incorrect syntax near ')'.

    The line numbers are of course not correct, but refer to the ";WITH rows AS" and the ')' char after "WHERE rl.rn = 1 )", respectively. Is there a tag for extra-super-long questions?

    UPDATE #2: Here is the finished query for anyone who may need this:

        CREATE PROCEDURE [dbo].[usp_GetProjectRecordsByAssignment] (
            @assigned numeric(18,0),
            @assignedtype numeric(18,0)
        )
        AS
        SET NOCOUNT ON

        WITH rows AS (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY recordid ORDER BY savedate DESC) AS rn
            FROM projects_history_assignment
        )
        SELECT projects_records.recordid, projects_records.recordtype, projects_recordtype_lu.recordtypedesc,
               projects_records.stage, projects_stage_lu.stagedesc,
               projects_records.position, projects_position_lu.positiondesc,
               CASE projects_records.clientrequested WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientrequested,
               projects_records.reportingmethod, projects_reportingmethod_lu.reportingmethoddesc,
               projects_records.clientaccess, projects_clientaccess_lu.clientaccessdesc,
               projects_records.clientnumber, projects_records.project, projects_lu.projectdesc,
               projects_records.version, projects_version_lu.versiondesc,
               projects_records.projectedversion, projects_version_lu_projected.versiondesc AS projectedversiondesc,
               projects_records.sitetype, projects_sitetype_lu.sitetypedesc,
               projects_records.title, projects_records.module, projects_module_lu.moduledesc,
               projects_records.component, projects_component_lu.componentdesc,
               projects_records.loginusername, projects_records.loginpassword, projects_records.assistedusername,
               projects_records.browsername, projects_browsername_lu.browsernamedesc, projects_records.browserversion,
               projects_records.osname, projects_osname_lu.osnamedesc, projects_records.osversion,
               projects_records.errortype, projects_errortype_lu.errortypedesc,
               projects_records.gsipriority, projects_gsipriority_lu.gsiprioritydesc,
               projects_records.clientpriority, projects_clientpriority_lu.clientprioritydesc,
               projects_records.scheduledstartdate, projects_records.scheduledcompletiondate,
               projects_records.projectedhours, projects_records.actualstartdate,
               projects_records.actualcompletiondate, projects_records.actualhours,
               CASE projects_records.billclient WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS billclient,
               projects_records.billamount, projects_records.status, projects_status_lu.statusdesc,
               CASE CAST(projects_records.assigned AS VARCHAR(5))
                   WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' WHEN '20000' THEN 'Client'
                   WHEN '30000' THEN 'Tech Support' WHEN '40000' THEN 'LMI Tech Support'
                   WHEN '50000' THEN 'Upload' WHEN '60000' THEN 'Spider' WHEN '70000' THEN 'DB Admin'
                   ELSE rtrim(users_assigned.nickname) + ' ' + rtrim(users_assigned.lastname)
               END AS assigned,
               CASE CAST(projects_records.assigneddev AS VARCHAR(5))
                   WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned'
                   ELSE rtrim(users_assigneddev.nickname) + ' ' + rtrim(users_assigneddev.lastname)
               END AS assigneddev,
               CASE CAST(projects_records.assignedqa AS VARCHAR(5))
                   WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned'
                   ELSE rtrim(users_assignedqa.nickname) + ' ' + rtrim(users_assignedqa.lastname)
               END AS assignedqa,
               CASE CAST(projects_records.assignedsponsor AS VARCHAR(5))
                   WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned'
                   ELSE rtrim(users_assignedsponsor.nickname) + ' ' + rtrim(users_assignedsponsor.lastname)
               END AS assignedsponsor,
               projects_records.clientcreated,
               CASE projects_records.clientcreated WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientcreateddesc,
               CASE projects_records.clientcreated
                   WHEN '1' THEN rtrim(clientusers_createuser.firstname) + ' ' + rtrim(clientusers_createuser.lastname) + ' (Client)'
                   ELSE rtrim(users_createuser.nickname) + ' ' + rtrim(users_createuser.lastname)
               END AS createuser,
               projects_records.createdate, projects_records.savedate,
               projects_resolution.sitesaffected, projects_sitesaffected_lu.sitesaffecteddesc,
               DATEDIFF(DAY, projects_history_assignment.savedate, GETDATE()) as 'daysinqueue',
               projects_records.iOnHitList, projects_records.changetype
        FROM dbo.projects_records WITH (NOLOCK)
        LEFT OUTER JOIN dbo.projects_recordtype_lu WITH (NOLOCK) ON projects_records.recordtype = projects_recordtype_lu.recordtypeid
        LEFT OUTER JOIN dbo.projects_stage_lu WITH (NOLOCK) ON projects_records.stage = projects_stage_lu.stageid
        LEFT OUTER JOIN dbo.projects_position_lu WITH (NOLOCK) ON projects_records.position = projects_position_lu.positionid
        LEFT OUTER JOIN dbo.projects_reportingmethod_lu WITH (NOLOCK) ON projects_records.reportingmethod = projects_reportingmethod_lu.reportingmethodid
        LEFT OUTER JOIN dbo.projects_lu WITH (NOLOCK) ON projects_records.project = projects_lu.projectid
        LEFT OUTER JOIN dbo.projects_version_lu WITH (NOLOCK) ON projects_records.version = projects_version_lu.versionid
        LEFT OUTER JOIN dbo.projects_version_lu projects_version_lu_projected WITH (NOLOCK) ON projects_records.projectedversion = projects_version_lu_projected.versionid
        LEFT OUTER JOIN dbo.projects_sitetype_lu WITH (NOLOCK) ON projects_records.sitetype = projects_sitetype_lu.sitetypeid
        LEFT OUTER JOIN dbo.projects_module_lu WITH (NOLOCK) ON projects_records.module = projects_module_lu.moduleid
        LEFT OUTER JOIN dbo.projects_component_lu WITH (NOLOCK) ON projects_records.component = projects_component_lu.componentid
        LEFT OUTER JOIN dbo.projects_browsername_lu WITH (NOLOCK) ON projects_records.browsername = projects_browsername_lu.browsernameid
        LEFT OUTER JOIN dbo.projects_osname_lu WITH (NOLOCK) ON projects_records.osname = projects_osname_lu.osnameid
        LEFT OUTER JOIN dbo.projects_errortype_lu WITH (NOLOCK) ON projects_records.errortype = projects_errortype_lu.errortypeid
        LEFT OUTER JOIN dbo.projects_resolution WITH (NOLOCK) ON projects_records.recordid = projects_resolution.recordid
        LEFT OUTER JOIN dbo.projects_sitesaffected_lu WITH (NOLOCK) ON projects_resolution.sitesaffected = projects_sitesaffected_lu.sitesaffectedid
        LEFT OUTER JOIN dbo.projects_gsipriority_lu WITH (NOLOCK) ON projects_records.gsipriority = projects_gsipriority_lu.gsipriorityid
        LEFT OUTER JOIN dbo.projects_clientpriority_lu WITH (NOLOCK) ON projects_records.clientpriority = projects_clientpriority_lu.clientpriorityid
        LEFT OUTER JOIN dbo.projects_status_lu WITH (NOLOCK) ON projects_records.status = projects_status_lu.statusid
        LEFT OUTER JOIN dbo.projects_clientaccess_lu WITH (NOLOCK) ON projects_records.clientaccess = projects_clientaccess_lu.clientaccessid
        LEFT OUTER JOIN dbo.users users_assigned WITH (NOLOCK) ON projects_records.assigned = users_assigned.userid
        LEFT OUTER JOIN dbo.users users_assigneddev WITH (NOLOCK) ON projects_records.assigneddev = users_assigneddev.userid
        LEFT OUTER JOIN dbo.users users_assignedqa WITH (NOLOCK) ON projects_records.assignedqa = users_assignedqa.userid
        LEFT OUTER JOIN dbo.users users_assignedsponsor WITH (NOLOCK) ON projects_records.assignedsponsor = users_assignedsponsor.userid
        LEFT OUTER JOIN dbo.users users_createuser WITH (NOLOCK) ON projects_records.createuser = users_createuser.userid
        LEFT OUTER JOIN dbo.clientusers clientusers_createuser WITH (NOLOCK) ON projects_records.createuser = clientusers_createuser.userid
        LEFT OUTER JOIN dbo.projects_history_assignment WITH (NOLOCK)
            ON projects_history_assignment.recordid = projects_records.recordid
            AND projects_history_assignment.historyid = (
                SELECT ro.historyid
                FROM rows rl
                CROSS APPLY (
                    SELECT TOP 1 rc.historyid
                    FROM rows rc
                    JOIN rows rn
                        ON rn.recordid = rc.recordid
                        AND rn.rn = rc.rn + 1
                        AND rn.assigned <> rc.assigned
                    WHERE rc.recordid = rl.recordid
                    ORDER BY rc.rn
                ) ro
                WHERE rl.rn = 1
                  AND rl.recordid = projects_records.recordid
            )
        WHERE (@assignedtype='0' and projects_records.assigned = @assigned)
           OR (@assignedtype='1' and projects_records.assigneddev = @assigned)
           OR (@assignedtype='2' and projects_records.assignedqa = @assigned)
           OR (@assignedtype='3' and projects_records.assignedsponsor = @assigned)
           OR (@assignedtype='4' and projects_records.createuser = @assigned)

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular now, but instead to find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets waaay more readership than the other two blogs, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then when there is a sports blog post with 200 FB likes and a gossip blog post with 300, while the tech blog posts today have 500 likes, I want to highlight the sports and gossip blog posts (more likes than average, vs. the tech blog with a greater number of likes but just average for that blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. post 1234:

        after 15 min:   10 fb likes, 4 tweets, 6 g+
        after 30 min:   15 fb likes, 15 tweets, 10 g+
        ...
        after 48 hours: 200 fb likes, 25 tweets, 15 g+

    By keeping a history like this for each blog post, I can know the average number of likes/shares/tweets at any given time interval. So, for example, if the average number of FB likes for all blog posts 48 hrs after posting is 50, and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time-frame, i.e. FB likes after 30 min or tweets after 24 hrs, in order to compute averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement, please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? Assuming that the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schema.

    Schema 1:

        DB Popular Posts
        Table: Post
            post_id (primary key (pk))
            url
            title
        Table: Social Activity
            activity_id (pk)
            url (fk)
            type (i.e. facebook, twitter, g+)
            value
            timestamp

    This was my initial instinct (based on my very limited DB knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users. The scenario in that question is similar (recording weight/height etc. of users over time). Taking the accepted answer for that question and applying it to my model results in something like:

    Schema 2 (same as above, but with the social activity broken down into 2 tables):

        DB Popular Posts
        Table: Post
            post_id (pk)
            url
            title
        Table: Social Measurement
            measurement_id (pk)
            post_id (fk)
            timestamp
        Table: Social stat
            stat_id (pk)
            measurement_id (fk)
            type (i.e. facebook, twitter, g+)
            value

    The advantage I see in schema 2 is that I will likely want to access all the values for a given time: when making a measurement at 30 min after a post is published, I will simultaneously check the number of FB likes, FB shares, FB comments, tweets, g+, LinkedIn. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x. Another thought I had: since it doesn't make sense to compare the number of FB likes with the number of tweets or g+ shares, maybe it makes sense to separate each social measurement into its own table?

    Schema 3:

        DB Popular Posts
        Table: Post
            post_id (pk)
            url
            title
        Table: fb_likes
            fb_like_id (pk)
            post_id (fk)
            timestamp
            value
        Table: fb_shares
            fb_shares_id (pk)
            post_id (fk)
            timestamp
            value
        Table: tweets
            tweets_id (pk)
            post_id (fk)
            timestamp
            value
        Table: google_plus
            google_plus_id (pk)
            post_id (fk)
            timestamp
            value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this? Does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
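
    For comparison, a minimal DDL sketch of Schema 2 (names and types are illustrative, not prescriptive): the measurement row pins down "post 1234 at 30 minutes", and each stat hangs off it, so adding a new network means a new stat_type value rather than a new table.

        CREATE TABLE post (
            post_id  INT PRIMARY KEY,
            url      VARCHAR(2048) NOT NULL,
            title    VARCHAR(512)  NOT NULL
        );

        CREATE TABLE social_measurement (
            measurement_id INT PRIMARY KEY,
            post_id        INT NOT NULL REFERENCES post (post_id),
            measured_at    TIMESTAMP NOT NULL
        );

        CREATE TABLE social_stat (
            stat_id        INT PRIMARY KEY,
            measurement_id INT NOT NULL REFERENCES social_measurement (measurement_id),
            stat_type      VARCHAR(20) NOT NULL,  -- 'fb_like', 'tweet', 'g+', ...
            value          INT NOT NULL
        );

        -- All stats for post 1234 at a given measurement time:
        SELECT s.stat_type, s.value
        FROM social_stat s
        INNER JOIN social_measurement m ON m.measurement_id = s.measurement_id
        WHERE m.post_id = 1234
          AND m.measured_at = '2012-12-01 10:30:00';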

    Read the article

  • SQL Server Data Tools CTP4 Released!

    - by hassanfadili
    The SQL Server team has released the new SQL Server Data Tools CTP4. Congratulations and thanks to Gert Drapers and his team on this great milestone. To learn more about this SSDT CTP4 release, check:

        - What's new in SQL Server Data Tools CTP4?
          http://blogs.msdn.com/b/ssdt/archive/2011/11/21/what-s-new-in-sql-server-data-tools-ctp4.aspx
        - SQL Server Data Tools CTP4 vs. VS2010 Database Projects
          http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx
        - Top VSDB->SSDT Project Conversion Issues
          http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx
        - Uninstalling SQL Server Developer Tools CTP3 (Code-named "Juneau")
          http://blogs.msdn.com/b/ssdt/archive/2011/11/21/uninstalling-ssdt-ctp3-code-named-juneau.aspx
          This one actually points to a nifty PowerShell script to help you uninstall.

    Have fun.

    Read the article

  • PASS Conference 2011 Topic: Multitenant Design and Sharding with SQL Azure

    - by Herve Roggero
    I am really happy to announce that I have been accepted as a speaker at the 2011 PASS Conference in Seattle. The topic? SQL Azure scalability using shards, and the Data Federation feature of SQL Azure. I will also talk extensively about the community open-source sharding library Enzo SQL Shard (enzosqlshard.codeplex.com) and show how to make the most of it. In general, the presentation will provide details about how to properly design an application for sharding, how to make it work for SQL Server and SQL Azure, and how to leverage the upcoming Data Federation technology that Microsoft is planning. The primary objective is to make sharding an implementation concern, not a development concern. Using a library like Enzo SQL Shard will help you achieve this objective. If you come to PASS Summit this year, come see me and mention you saw this blog!

    Read the article

  • Microsoft Press Deal of the day 4/Sep/2012 - Programming Microsoft® SQL Server® 2012

    - by TATWORTH
    Today's deal of the day from Microsoft Press at http://shop.oreilly.com/product/0790145322357.do?code=MSDEAL is Programming Microsoft® SQL Server® 2012 "Your essential guide to key programming features in Microsoft® SQL Server® 2012 Take your database programming skills to a new level—and build customized applications using the developer tools introduced with SQL Server 2012. This hands-on reference shows you how to design, test, and deploy SQL Server databases through tutorials, practical examples, and code samples. If you’re an experienced SQL Server developer, this book is a must-read for learning how to design and build effective SQL Server 2012 applications."

    Read the article
