Search Results


  • Accelerometer stops delivering samples when the screen is off on Droid/Nexus One even with a WakeLock

    - by William
    I have some code that extends a service and records onSensorChanged(SensorEvent event) accelerometer sensor readings on Android. I would like to be able to record these sensor readings even when the device is off (I'm careful with battery life and it's made obvious when it's running). While the screen is on the logging works fine on a 2.0.1 Motorola Droid and a 2.1 Nexus One. However, when the phone goes to sleep (by pushing the power button) the screen turns off and the onSensorChanged events stop being delivered (verified by using a Log.e message every N times onSensorChanged gets called). The service acquires a wakeLock to ensure that it keeps running in the background; but, it doesn't seem to have any effect. I've tried all the various PowerManager. wake locks but none of them seem to matter. _WakeLock = _PowerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "My Tag"); _WakeLock.acquire(); There have been conflicting reports about whether or not you can actually get data from the sensors while the screen is off... anyone have any experience with this on a more modern version of Android (Eclair) and hardware? This seems to indicate that it was working in Cupcake: http://groups.google.com/group/android-developers/msg/a616773b12c2d9e5 Thanks! PS: The exact same code works as intended in 1.5 on a G1. The logging continues when the screen turns off, when the application is in the background, etc.

    Read the article

  • JavaFX layout problem with VBox & HBoxes

    - by pgpatrudu
    When I run the following, I noticed spacing between nodes; My research revealed that - 1) If I do not add any text to win1 via setwininfo, then there is no problem. 2) When I include this code in a larger app, and when a button click is reveived from some where else, mysteriously the spacing gets corrected. 3) I tried binding the win1 & win2 nodes to content of scene - but no luck. def mainframew : Integer = 250; def mainframeh : Integer = 500; class CtrlWindow extends CustomNode { var wininfo : String; var fsize : Integer; var width : Integer; public function setWinInfo(info : String) { wininfo = info; } override protected function create () : Node { var win = Group { content: [ VBox { content: [ Text { font : Font { size: fsize } content : bind wininfo textAlignment : TextAlignment.CENTER // did not work } ] } Rectangle { width: width, height: 25 fill: Color.TRANSPARENT strokeWidth : 2 stroke : Color.SILVER } ] } return win; } } public function run(args : String[]) { var win1 = CtrlWindow{fsize:14, width:mainframew}; var win2 = CtrlWindow{fsize:14, width:mainframew}; win1.setWinInfo("The spacing between these nodes"); win2.setWinInfo("corrects itself after receiving an event"); Stage { title : "MyApp" scene: Scene { width: mainframew height: mainframeh content: [ VBox { spacing: 0 content: [ HBox { content: win1 } HBox { content: win2 } ] } ] } }

    Read the article

  • Audio Streaming Latency

    - by killianmcc
    I'm writing a UDP local area network video chat system and have got the video and audio streams working. However I'm experiencing a little latency (about half a second) in the audio and was wondering what codecs would provide the least latency. I'm using NAudio (http://naudio.codeplex.com/) which provides me access to the following codecs for streaming; Speex Narrow Band (VBR) Speex Wide Band (16kHz)(VBR) Speex Ultra Wide Band (32kHz)(VBR) DSP Group TrueSpeech (8.5kbps) GSM 6.10 (13kbps) Microsoft ADPCM (32.8kbps) G.711 a-law (64kbps) G.722 16kHz (64kbps) G.711 mu-law (64kbps) PCM 8kHz 16 bit uncompressed (128kbps) I've tried them out and I'm not noticing much difference. Is there any others that I should download and try to reduce latency? I'm only going to be sending voice over the connection but I'm not really worried about quality or background noises too much. UPDATE I'm sending the audio in blocks like so; waveIn = new WaveIn(); waveIn.BufferMilliseconds = 50; waveIn.DeviceNumber = inputDeviceNumber; waveIn.WaveFormat = codec.RecordFormat; waveIn.DataAvailable += waveIn_DataAvailable; void waveIn_DataAvailable(object sender, WaveInEventArgs e) { if (connected) { byte[] encoded = codec.Encode(e.Buffer, 0, e.BytesRecorded); udpSender.Send(encoded, encoded.Length); } }

    Read the article

  • How best to organize projects folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS'05 web site project to a VS'08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository such that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this: c:\root c:\root\projectA c:\root\projectB c:\root\projectC where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this: c:\root\projectA (parent folder) c:\root\projectA\projectA (the production code project) c:\root\projectA\projectA_Test (the unit test project) c:\root\projectA\TestResults c:\root\projecta\projectA.sln How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place then where do I keep my unit test projects and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.

    Read the article

  • Pre-written SQL queries to be converted to Rails-style models

    - by Hoornet
    I am a Rails newbie and would really appreciate it if someone converted these SQL queries to complete models for Rails. I know it's a lot to ask, but I can't just use find_by_sql for all of them. Or can I? These are the queries (they run on MS SQL):

    1)
        SELECT STANJA_NA_DAN_POSTAVKA.STA_ID, STP_DATE, STP_TIME, STA_OPIS, STA_SIFRA, STA_POND
        FROM STANJA_NA_DAN_POSTAVKA
        INNER JOIN STANJA_NA_DAN ON (STANJA_NA_DAN.STA_ID = STANJA_NA_DAN_POSTAVKA.STA_ID)
        WHERE ((OSE_ID=10) AND (STANJA_NA_DAN_POSTAVKA.STP_DATE={d '2010-03-30'}) AND (STANJA_NA_DAN_POSTAVKA.STP_DATE<={d '2010-03-30'}))

    2)
        SELECT ZIGI_OBDELANI.OSE_ID, ZIGI_OBDELANI.DOG_ID AS DOG_ID, ZIGI_OBDELANI.ZIO_DATUM AS DATUM,
               ZIGI_PRICETEK.ZIG_TIME_D AS ZIG_PRICETEK, ZIGI_KONEC.ZIG_TIME_D AS ZIG_KONEC
        FROM (ZIGI_OBDELANI INNER JOIN ZIGI ZIGI_PRICETEK ON ZIGI_OBDELANI.ZIG_ID_PRICETEK = ZIGI_PRICETEK.ZIG_ID)
        INNER JOIN ZIGI ZIGI_KONEC ON ZIGI_OBDELANI.ZIG_ID_KONEC = ZIGI_KONEC.ZIG_ID
        WHERE (ZIGI_OBDELANI.OSE_ID = 10) AND (ZIGI_OBDELANI.ZIO_DATUM = {d '2010-03-30'}) AND (ZIGI_OBDELANI.ZIO_DATUM <= {d '2010-03-30'})
          AND (ZIGI_PRICETEK.ZIG_VELJAVEN < 0) AND (ZIGI_KONEC.ZIG_VELJAVEN < 0)
        ORDER BY ZIGI_OBDELANI.OSE_ID, ZIGI_PRICETEK.ZIG_TIME ASC

    3)
        SELECT STA_ID, SUM(STP_TIME) AS SUM_STP_TIME, COUNT(STA_ID)
        FROM STANJA_NA_DAN_POSTAVKA
        WHERE ((STP_DATE={d '2010-03-30'}) AND (STP_DATE<={d '2010-03-30'}) AND (STA_ID=3) AND (OSE_ID=10))
        GROUP BY STA_ID

    4)
        SELECT DATUM, TDN_ID, TDN_OPIS, URN_OPIS, MOZNI_PROBLEMI, PRIHOD, ODHOD, OBVEZNOST, ZAKLJUCEVANJE_DATUM
        FROM OBRACUNAJ_DAN
        WHERE ((OSE_ID=10) AND (DATUM={d '2010-02-28'}) AND (DATUM<={d '2010-03-30'}))
        ORDER BY DATUM

    These queries compute daily working hours and I got them as is. I also got the database with them, which (as you can see from the SQL) does not follow Rails conventions. As a P.S.: 1) Things like STP_DATE={d '2010-03-30'} are of course dates (in Slovenian date notation) and will be replaced with a variable, so that the user can choose a date from and a date to. 2) All of this data will be shown on the same page in a table, so maybe all in one model? Or many? So can someone help me? It's for my work and it's my first project, I am a Rails newbie, and the bosses are getting impatient (they are getting quite loud, actually). Thank you very very much!
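
    One possible first step, before mapping anything to ActiveRecord models, is to rewrite the raw SQL with ordinary bind parameters instead of the ODBC {d '...'} literals. The sketch below covers only query 1 and assumes the first STP_DATE comparison was meant to be >= (the "date from" mentioned in the P.S.); the table and column names are taken from the query as given:

        SELECT STANJA_NA_DAN_POSTAVKA.STA_ID, STP_DATE, STP_TIME, STA_OPIS, STA_SIFRA, STA_POND
        FROM STANJA_NA_DAN_POSTAVKA
        INNER JOIN STANJA_NA_DAN
                ON STANJA_NA_DAN.STA_ID = STANJA_NA_DAN_POSTAVKA.STA_ID
        WHERE OSE_ID = ?                                -- person id
          AND STANJA_NA_DAN_POSTAVKA.STP_DATE >= ?      -- date from (assumed; the original uses =)
          AND STANJA_NA_DAN_POSTAVKA.STP_DATE <= ?      -- date to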

    Read the article

  • How do I block my Rails app from being hit by bots?

    - by codeman73
    I'm not even sure I'm using the right terminology, or whether these are actually bots. I didn't want to use the word 'spam' because it's not like I have comments or posts that are being created/spammed. It looks more like something is making the same repeated request to my domain, which is what made me think it was some kind of bot. I've opened my first Rails app to the 'public', which is really a small group of users, <50 currently. That was last Friday. I started having performance issues today, so I looked at the log and I see tons of these RoutingErrors:

        ActionController::RoutingError (No route matches "/portalApp/APF/pages/business/util/whichServer.jsp" with {:method=>:get}):

    They are filling up the log and I'm assuming this is causing the slowdown. Note the .jsp on the end; this is a Rails app, so I've got no URLs remotely like this in my app. I mean, the /portalApp I don't even have, so I don't know where this is coming from. This is hosted at Dreamhost and I chatted with one of their support people, and he suggested a couple of sites that detail using htaccess to block things. But that looks like you need to know the IP or domain that the requests are coming from, which I don't. How can I block this? How can I find the IP or domain from the request? Any other suggestions?

    Read the article

  • MySQL product/tag query optimisation - please help!

    - by Nige
    Hi there, I have an SQL query I am struggling to optimise. It is basically used to pull back products for a shopping cart. The products each have tags attached via a many-to-many table, product_tag, and I also pull back a store name from a separate store table. I'm using GROUP_CONCAT to get a list of tags for the display (this is why I have the strange GROUP BY / ORDER BY clauses at the bottom), and I need to order by dateadded, showing the latest scheduled product first. Here is the query:

        SELECT products.*, stores.name, GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR " ") taglist
        FROM (products)
        JOIN product_tag ON products.id=product_tag.productid
        JOIN tags ON tags.id=product_tag.tagid
        JOIN stores ON products.cid=stores.siteid
        WHERE dateadded < '2010-05-28 07:55:41'
        GROUP BY products.id ASC
        ORDER BY products.dateadded DESC
        LIMIT 2

    Unfortunately, even with a small set of data (3 tags and about 12 products) the query is taking 00.0034 seconds to run. Eventually I want to have about 2000 products and 50 tags in this system (I'm guessing this will be very slooooow). Here is the EXPLAIN output:

        id|select_type|table|type|possible_keys|key|key_len|ref|rows|Extra
        1|SIMPLE|tags|ALL|PRIMARY|NULL|NULL|NULL|4|Using temporary; Using filesort
        1|SIMPLE|product_tag|ref|tagid,productid|tagid|4|cs_final.tags.id|2|
        1|SIMPLE|products|eq_ref|PRIMARY,cid|PRIMARY|4|cs_final.product_tag.productid|1|Using where
        1|SIMPLE|stores|ALL|siteid|NULL|NULL|NULL|7|Using where; Using join buffer

    Can anyone help?
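
    One approach that is sometimes suggested for this shape of query is to pick the two newest product ids in a derived table first, so the tag join and GROUP_CONCAT only ever run over those rows, and to make sure dateadded is indexed. This is only a sketch under those assumptions (no existing index on dateadded, and the same column names as above):

        -- hypothetical index; skip if one already exists
        CREATE INDEX idx_products_dateadded ON products (dateadded);

        SELECT p.*, stores.name,
               GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR ' ') AS taglist
        FROM (
               SELECT id
               FROM products
               WHERE dateadded < '2010-05-28 07:55:41'
               ORDER BY dateadded DESC
               LIMIT 2
             ) AS newest
        JOIN products p    ON p.id = newest.id
        JOIN product_tag   ON product_tag.productid = p.id
        JOIN tags          ON tags.id = product_tag.tagid
        JOIN stores        ON stores.siteid = p.cid
        GROUP BY p.id
        ORDER BY p.dateadded DESC;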

    Read the article

  • SQL code to compare COUNT() results with a value retrieved from another column

    - by Doctor Trout
    I have three tables (these are the relevant columns):

        Table1: bookingid, person, role
        Table2: bookingid, projectid
        Table3: projectid, project, numberofrole1, numberofrole2

    Table1.role can take two values: "role1" or "role2". What I want to do is show which projects don't have the correct number of roles in Table1. The number of roles there should be for each role is in Table3. For example, if Table1 contains these three rows:

        bookingid, person, role
        7, Tim, role1
        7, Bob, role1
        7, Charles, role2

    and Table2:

        bookingid, projectid
        7, 1

    and Table3:

        projectid, project, numberofrole1, numberofrole2
        1, Test1, 2, 2

    I would like the results to show that there is not the correct number of role2s for project Test1. To be honest, something like this is a bit beyond my ability, so I'm open to suggestions on the best way to do this. I'm using SQLite and PHP (it's only a small project). I suppose I could do something with the PHP at the end once I've got my results, but I wondered if there was a better way to do it with SQLite. I started by doing something like this:

        SELECT project, COUNT(numberofrole1) as "Role"
        FROM Table1
        JOIN Table2 USING (projectid)
        JOIN Table3 USING (bookingid)
        WHERE role="role1"
        GROUP BY project

    But I can't work out how to compare the value returned as "Role" with the value from numberofrole1. Any help is gratefully received.
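
    A sketch of one way to do the comparison entirely in SQLite, using conditional counts per project and comparing them with the expected numbers in the HAVING clause (this assumes every booking is linked to its project through Table2):

        SELECT Table3.project,
               Table3.numberofrole1,
               SUM(CASE WHEN Table1.role = 'role1' THEN 1 ELSE 0 END) AS actual_role1,
               Table3.numberofrole2,
               SUM(CASE WHEN Table1.role = 'role2' THEN 1 ELSE 0 END) AS actual_role2
        FROM Table3
        LEFT JOIN Table2 ON Table2.projectid = Table3.projectid
        LEFT JOIN Table1 ON Table1.bookingid = Table2.bookingid
        GROUP BY Table3.projectid, Table3.project, Table3.numberofrole1, Table3.numberofrole2
        HAVING SUM(CASE WHEN Table1.role = 'role1' THEN 1 ELSE 0 END) <> Table3.numberofrole1
            OR SUM(CASE WHEN Table1.role = 'role2' THEN 1 ELSE 0 END) <> Table3.numberofrole2;

    With the sample data this returns Test1 with expected counts 2/2 and actual counts 2/1.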

    Read the article

  • How can I filter then modify e-mails using IMAP?

    - by swolff1978
    I have asked this question in a different post here on SO: How can a read receipt be suppressed? I have been doing some research of my own to try and solve this problem, and accessing the e-mail account via IMAP seems like it is going to be a good solution. I have successfully been able to access my own Inbox and mark messages as read with no issue. I have been asked to perform the same task on an Inbox that contains over 23,000 e-mails. I would like to run the test on a small number of e-mails from that inbox before letting it run against the whole 23,000. Here are the commands I have been running via telnet:

        . LOGIN [email protected] password
        . SELECT Inbox
        . STORE 1:* flags \Seen    'this line marks all the e-mails as read

    So my question is: how can I execute that STORE command on a specific group of e-mails, say e-mails that are going to / coming from a specific account? Is there a way to chain the commands, like a FETCH and then the STORE? Or is there a better way, that can be accomplished through IMAP, of getting a collection of e-mails based on certain criteria and then modifying only those e-mails?

    Read the article

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result-set:

        mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
            -> JOIN slip s ON s.cust_id = c.cust_id
            -> JOIN line l ON l.slip_id = s.slip_id
            -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
            -> GROUP BY c.cust_name
            -> HAVING SUM(l.line_subtotal) > 49999
            -> ORDER BY c.cust_name;

        | id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra                                        |
        |  1 | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                |    1 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id |  446 |                                              |
        |  1 | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id |    1 |                                              |
        |  1 | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id |    1 |                                              |

        4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement is still taking about a minute to execute. Isn't it true that this query only has to search through 449 rows? Anyone have any idea as to what could be slowing it down so much?
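
    Nothing in the plan looks obviously wrong, so one low-risk thing to try is a covering index on the line table, so the 446 lookups per vendor can be served from the index alone, plus refreshing statistics. This is only a sketch, assuming those column names and that no such index exists yet:

        -- hypothetical covering index for the join and the SUM
        ALTER TABLE line ADD INDEX idx_line_vend_cover (vend_id, slip_id, line_subtotal);

        -- refresh index statistics so the optimizer works from current row counts
        ANALYZE TABLE customer, slip, line, vendor;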

    Read the article

  • Call stored proc using xml output from a table

    - by user263097
    I'm under a tight deadline; I know I can figure this out eventually, but I don't have much time to do it on my own. I have a table that has columns for customer id and account number, among many other additional columns. There can be many accounts for a single customer (many rows with the same customer id but different account numbers). For each customer in the table I need to call a stored procedure and pass data from my table as XML in the following format. Notice that the XML is for all of the customer's accounts.

        <Accounts>
          <Account>
            <AccountNumber>12345</AccountNumber>
            <AccountStatus>Open</AccountStatus>
          </Account>
          <Account>
            <AccountNumber>54321</AccountNumber>
            <AccountStatus>Closed</AccountStatus>
          </Account>
        </Accounts>

    So I guess I need help with two things. First, how to get the data in this XML format; I'm assuming I'll use some variation of FOR XML. The other thing is: how do I group by customer id and then call a sproc for each customer id?
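
    A sketch of one way this is often done in SQL Server 2005 and later: build the XML per customer with FOR XML PATH and loop over customers with a cursor, calling the proc once per customer. The table name, the CustomerId/AccountStatus columns and the procedure signature below are placeholders, not taken from the question:

        DECLARE @custId INT, @xml XML;

        DECLARE cust_cursor CURSOR FOR
            SELECT DISTINCT CustomerId FROM dbo.CustomerAccounts;

        OPEN cust_cursor;
        FETCH NEXT FROM cust_cursor INTO @custId;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- shape this customer's accounts as <Accounts><Account>...</Account></Accounts>
            SET @xml =
                (SELECT AccountNumber, AccountStatus
                 FROM dbo.CustomerAccounts
                 WHERE CustomerId = @custId
                 FOR XML PATH('Account'), ROOT('Accounts'), TYPE);

            EXEC dbo.ProcessCustomerAccounts @CustomerId = @custId, @Accounts = @xml;  -- hypothetical proc

            FETCH NEXT FROM cust_cursor INTO @custId;
        END

        CLOSE cust_cursor;
        DEALLOCATE cust_cursor;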

    Read the article

  • Bootstrap inline button dropdown within <p> jumbotron

    - by C.B.
    Currently I have a jumbotron setup with some paragraph text, and I would like to stick a button dropdown inline with the text. Dropdown button <span class="btn-group"> <button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown"> Button... <span class="caret"></span> </button> <ul class="dropdown-menu" role="menu"> <li><a href="#">Opt 1</a></li> <li><a href="#">Opt 2</a></li> </ul> </span> Jumbotron <div class="jumbotron"> <h1>Hello!</h1> <p>Welcome</p> <p>Another paragraph <!-- dropdown is here --> </p> </div> <!-- jumbotron --> If the dropdown is within the <p> tag, it does not "dropdown" (but renders). If it is outside of the <p> tag it functions fine, but I would like it to be inline with the text and need the text to be in the <p> tag to get the style. Any ideas? Things to note -- If I replace the <span> tags with <div> tags, it will work fine within the <p> tags, but won't be inline.

    Read the article

  • How to read part of a string in Java

    - by Gandalf StormCrow
    Hello everyone, I have this string:

        <meis xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" uri="localhost/naro-nei" onded="flpSW531213" identi="lemenia" id="75" lastStop="bendi" xsi:noNamespaceSchemaLocation="http://localhost/xsd/postat.xsd xsd/postat.xsd">

    How can I get the lastStop property value in Java? This regex worked when tested on http://www.myregexp.com/ but when I try it in Java I don't see the matched text. Here is how I tried:

        import java.util.regex.Pattern;
        import java.util.regex.Matcher;

        public class SimpleRegexTest {
            public static void main(String[] args) {
                String sampleText = "<meis xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" uri=\"localhost/naro-nei\" onded=\"flpSW531213\" identi=\"lemenia\" id=\"75\" lastStop=\"bendi\" xsi:noNamespaceSchemaLocation=\"http://localhost/xsd/postat.xsd xsd/postat.xsd\">";
                String sampleRegex = "(?<=lastStop=[\"']?)[^\"']*";
                Pattern p = Pattern.compile(sampleRegex);
                Matcher m = p.matcher(sampleText);
                if (m.find()) {
                    String matchedText = m.group();
                    System.out.println("matched [" + matchedText + "]");
                } else {
                    System.out.println("didn't match");
                }
            }
        }

    Read the article

  • How do I run an array through an IF statement in Rails?

    - by codyvbrown
    I am creating an application that highlights user messages from a stream based on whether or not the user has been 'vouched'. It works fine if it's setup for a single author. For example controller: @vouch = Vouch.last.vouched_user_nickname view: <% Twitter::Search.new(params[:id]).each do |tweet| %> <li> <%= image_tag tweet.profile_image_url %> <% if @vouch.include? tweet.from_user %> <div class="flit_message_containerh"> <u> <a href="http://twitter.com/<%= tweet.from_user %>"> <%= tweet.from_user %></a></u> <%= linkup_mentions(auto_link(h tweet.text)) %> <div class="time_ago"> <%= link_to distance_of_time_in_words_to_now(tweet.created_at) , tweet %> <% else %> <div class="flit_message_container"> <u> <a href="http://twitter.com/<%= tweet.from_user %>"> <%= tweet.from_user %></a></u> <%= linkup_mentions(auto_link(h tweet.text)) %> <div class="time_ago"> <%= link_to distance_of_time_in_words_to_now(tweet.created_at) , tweet %> <% end %> But I'm having trouble doing it for multiple user nicknames. @vouch = Vouch.find(:all, :select => "vouched_user_nickname", :group => 'vouched_user_nickname' ) Any ideas would be greatly appreciated. I'm a rails noob.

    Read the article

  • SQL Server query

    - by carrot_programmer_3
    Hi, I have a SQL Server DB containing a registrations table that I need to plot on a graph over time. The issue is that I need to break this down by where the user registered from (e.g. website, wap site, or a mobile application). The resulting output data should look like this:

        [date] [num_reg_website] [num_reg_wap_site] [num_reg_mobileapp]
        1 FEB 2010,24,35,64
        2 FEB 2010,23,85,48
        3 FEB 2010,29,37,79
        etc...

    The source table is as follows:

        UUID(int), signupdate(datetime), requestsource(varchar(50))

    Some sample data in this table looks like this:

        1001,2010-02-2:00:12:12,'website'
        1002,2010-02-2:00:10:17,'app'
        1003,2010-02-3:00:14:19,'website'
        1004,2010-02-4:00:16:18,'wap'
        1005,2010-02-4:00:18:16,'website'

    Running the following query returns one data column, 'total registrations', for the website registrations, but unfortunately I'm not sure how to do this for multiple columns:

        select CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME) as [signupdate],
               count(UUID) as 'total registrations'
        FROM [UserRegistrationRequests]
        WHERE requestsource = 'website'
        group by CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME)
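
    A conditional-aggregation sketch that pivots the three sources into columns in one pass; the requestsource values ('website', 'wap', 'app') are taken from the sample data above and may need adjusting:

        SELECT CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME) AS [signupdate],
               SUM(CASE WHEN requestsource = 'website' THEN 1 ELSE 0 END) AS num_reg_website,
               SUM(CASE WHEN requestsource = 'wap'     THEN 1 ELSE 0 END) AS num_reg_wap_site,
               SUM(CASE WHEN requestsource = 'app'     THEN 1 ELSE 0 END) AS num_reg_mobileapp
        FROM [UserRegistrationRequests]
        GROUP BY CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME)
        ORDER BY 1;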

    Read the article

  • Unsure how to come up with a good design

    - by Mewzer
    Hello there, I am having trouble coming up with a good design for a group of classes and was hoping that someone could give me some guidance on best practices. I have kept the classes and member functions generic to make the problem simpler. Essentially, I have three classes (lets call them A, B, and C) as follows: class A { ... int GetX( void ) const { return x; }; int GetY( void ) const { return y; }; private: B b; // NOTE: A "has-a" B int x; int y; }; class B { ... void SetZ( int value ) { z = value }; private: int z; C c; // NOTE: B "has-a" C }; class C { private: ... void DoSomething(int x, int y){ ... }; void DoSomethingElse( int z ){ ... }; }; My problem is as follows: Class A uses its member variables "x" and "y" a lot internally. Class B uses its member variable "z" a lot internally. Class B needs to call C::DoSomething(), but C::DoSomething() needs the values of X and Y in class A passed in as arguments. C::DoSomethingElse() is called from say another class (e.g. D), but it needs to invoke SetZ() in class B!. As you can see, it is a bit of a mess as all the classes need information from one another!. Are there any design patterns I can use?. Any ideas would be much appreciated ....

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem, which is apparently not easy to solve with SQL, at least for me. Assume the following table structure (should work in SQLite, PostgreSQL, MySQL):

        CREATE TABLE a ( a_id INT PRIMARY KEY );
        INSERT INTO a (a_id) VALUES (1), (2), (3), (4);

        CREATE TABLE b (
          b_id INT PRIMARY KEY,
          a_id INT,
          name VARCHAR(255) NOT NULL
        );
        INSERT INTO b (b_id, a_id, name) VALUES
          (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'),
          (4, 2, 'foo'), (5, 2, 'bar'),
          (6, 3, 'foo');

    Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to:

        a_id = 3 is missing name "bar"
        a_id = 4 is missing name "foo" and "bar"

    Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be:

        SELECT a.a_id, CONCAT('|', GROUP_CONCAT(name ORDER BY NAME ASC SEPARATOR '|'), '|') as names
        FROM a
        LEFT JOIN b USING (a_id)
        GROUP BY a.a_id
        HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%';

    I have yet to measure the performance tomorrow, but I severely doubt it will be fast for tens of thousands of entries in a and thrice as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
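
    A portable sketch that avoids GROUP_CONCAT entirely: pair every row of a with the list of required names and keep only the combinations that have no matching row in b. It should run unchanged on SQLite, PostgreSQL and MySQL, and an index on b(a_id, name) keeps the NOT EXISTS probe cheap:

        SELECT a.a_id, req.name AS missing_name
        FROM a
        CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
        WHERE NOT EXISTS (
            SELECT 1
            FROM b
            WHERE b.a_id = a.a_id
              AND b.name = req.name
        )
        ORDER BY a.a_id, req.name;

    With the sample data this returns (3, 'bar'), (4, 'bar') and (4, 'foo').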

    Read the article

  • Best approach to cache counts from SQL tables?

    - by pixel3cs
    I would like to develop a forum from scratch, with special needs and customization. I would like to prepare my forum for intensive usage and am wondering how to cache things like the user posts count and user replies count. Having only three tables, tblForum, tblForumTopics, tblForumReplies, what is the best approach to caching the user topics and replies counts? Think of a simple scenario: a user presses a link, opens the Replies.aspx?id=x&page=y page, and starts reading replies. On the HTTP request, the server will run an SQL command which will fetch all replies for that page, also inner joining with tblForumReplies to find out the number of replies for each user that replied:

        select tblForumReplies.*, tblFR.TotalReplies
        from tblForumReplies
        inner join (
            select IdRepliedBy, count(*) as TotalReplies
            from tblForumReplies
            group by IdRepliedBy
        ) as tblFR on tblFR.IdRepliedBy = tblForumReplies.IdRepliedBy

    Unfortunately this approach is very CPU-intensive, and I would like to see your ideas on how to cache things like table counts. If I count replies for each user on insert/delete and store the count in a separate field, how do I synchronize it with manual data changes? Suppose I will manually delete replies from SQL.
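
    One common answer is to keep the per-user counter in a separate column and let the database maintain it with a trigger, so inserts and deletes done manually in SQL keep the counter in sync as well. A sketch in T-SQL; the tblForumUsers table and its UserId/TotalReplies columns are assumptions, not part of the schema described above:

        CREATE TRIGGER trg_ReplyCount
        ON tblForumReplies
        AFTER INSERT, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- add newly inserted replies to each author's counter
            UPDATE u
            SET u.TotalReplies = u.TotalReplies + i.cnt
            FROM tblForumUsers u
            JOIN (SELECT IdRepliedBy, COUNT(*) AS cnt FROM inserted GROUP BY IdRepliedBy) i
              ON i.IdRepliedBy = u.UserId;

            -- subtract deleted replies, including manual deletes
            UPDATE u
            SET u.TotalReplies = u.TotalReplies - d.cnt
            FROM tblForumUsers u
            JOIN (SELECT IdRepliedBy, COUNT(*) AS cnt FROM deleted GROUP BY IdRepliedBy) d
              ON d.IdRepliedBy = u.UserId;
        END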

    Read the article

  • NServiceBus & MSMQ: How To Change the Default Permissions on the Queue?

    - by Amy T
    My team is on our first attempt at using NServiceBus (v2.0), using MSMQ as the backing storage. We're getting stuck on queue permissions. We're using it in a Web Forms application, where the user account the website runs under is not an administrator on the machine. When NServiceBus creates the MSMQ queue, it gives the local administrators group full control, and the local everyone and anonymous groups permissions to send messages. But then later, as part of initializing the queue, NServiceBus tries to read all of its messages. That's where we run into the permissions error. Since the website isn't running as an administrator, it's not allowed to read messages. How are other people dealing with this? Do your applications run as administrators? Or do you create the MSMQ queue in your code first, giving it the permissions you need, so that NServiceBus doesn't have to create it? Or is there a bit of configuration we're missing? Or are we likely writing our code that uses NServiceBus incorrectly to be running into this?

    Read the article

  • Querying using table-valued parameter

    - by antmx
    I need help please with writing a sproc, it takes a table-valued parameter @Locations, whose Type is defined as follows: CREATE TYPE [dbo].[tvpLocation] AS TABLE( [CountryId] [int] NULL, [ResortName] [nvarchar](100) NULL, [Ordinal] [int] NOT NULL, PRIMARY KEY CLUSTERED ( [Ordinal] ASC )WITH (IGNORE_DUP_KEY = OFF) ) @Locations will contain at least 1 row. Each row WILL have a non-null CountryId, and MAY have a non-null ResortName. Each row will have a unique Ordinal, the first being 0. The combinations of CountryId and ResortName in @Locations will be unique. The sproc needs to search against the following table structure. The image can be seen better by right-clicking it and View Image, or similar depending on your browser. Now this is where I'm stuck, the sproc should be able to find Tours where: The Tour's 1st TourHotel (Ordinal 0) has the same CountryId (and ResortName if specified) of the 1st row of @Locations (Ordinal 0). And also if @Locations has 1 row, the Tour must have additional TourHotels, ALL of which must be in the remaining CountryIds (and ResortNames if specified) of these remaining @Locations rows. Edit This is the code I finally used, based on Anthony Faull's suggestion. Thank you so much Anthony: select distinct T.Id from tblTour T join tblTourHotel TH on TH.TourId = T.Id join tblHotel H ON H.Id = TH.HotelId JOIN @Locations L ON ( ( L.Ordinal = 0 AND TH.Ordinal = 0 ) OR ( L.Ordinal > 0 AND TH.Ordinal > 0 ) ) AND L.CountryId = H.CountryId AND ( L.ResortName = H.ResortName OR L.ResortName IS NULL ) cross apply( select COUNT(TH2.Id) AS [Count] FROM tblTourHotel TH2 where TH2.TourId = TH.TourId ) TourHotelCount where TourHotelCount.[Count] = @LocationCount group by T.Id, T.TourRef, T.Description, T.DepartureDate, T.NumNights, T.DepartureAirportId, T.DestinationAirportId, T.AirlineId, T.FEPrice having COUNT(distinct TH.Id) = @LocationCount
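
    For reference, a sketch of how the sproc might be exercised from T-SQL; the procedure name is hypothetical, and @LocationCount is assumed to be derived inside the proc from the TVP itself:

        DECLARE @Locations dbo.tvpLocation;

        INSERT INTO @Locations (CountryId, ResortName, Ordinal)
        VALUES (5, 'Some Resort', 0),   -- first location: country + resort
               (9, NULL,          1);   -- remaining location: country only

        -- inside the proc: SELECT @LocationCount = COUNT(*) FROM @Locations;

        EXEC dbo.SearchTours @Locations = @Locations;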

    Read the article

  • Running a process at the Windows 7 Welcome Screen

    - by peelman
    So here's the scoop: I wrote a tiny C# app a while back that displays the hostname, ip address, imaged date, thaw status (we use DeepFreeze), current domain, and the current date/time, to display on the welcome screen of our Windows 7 lab machines. This was to replace our previous information block, which was set statically at startup and actually embedded text into the background, with something a little more dynamic and functional. The app uses a Timer to update the ip address, deepfreeze status, and clock every second, and it checks to see if a user has logged in and kills itself when it detects such a condition. If we just run it, via our startup script (set via group policy), it holds the script open and the machine never makes it to the login prompt. If we use something like the start or cmd commands to start it off under a separate shell/process, it runs until the startup script finishes, at which point Windows seems to clean up any and all child processes of the script. We're currently able to bypass that using psexec -s -d -i -x to fire it off, which lets it persist after the startup script is completed, but can be incredibly slow, adding anywhere between 5 seconds and over a minute to our startup time. We have experimented with using another C# app to start the process, via the Process class, using WMI Calls (Win32_Process and Win32_ProcessStartup) with various startup flags, etc, but all end with the same result of the script finishing and the info block process getting killed. I tinkered with rewriting the app as a service, but services were never designed to interact with the desktop, let alone the login window, and getting things operating in the right context never really seemed to work out. So for the question: Does anybody have a good way to accomplish this? Launch a task so that it would be independent of the startup script and run on top of the welcome screen?

    Read the article

  • HTTP 401.3 when PUT, DELETE to ADO.NET Data Service (.svc)

    - by Nate
    I have an ADO.NET Data Service (we'll call it service.svc). When I deploy it to an IIS 6 site with Integrated Windows Authentication turned on, all requests (GET, POST, PUT, and DELETE) work fine for me, because I am an administrator on the box. However, when a non-admin user hits the service, only GET and POST requests work. When they try a PUT or DELETE request, they get an HTTP 401.3 "Access is Denied" error: "Error message 401.3: You do not have permission to view this directory or page using the credentials you supplied (access denied due to Access Control Lists). Ask the web server's administrator to give you access to '...\service.svc'." If I give the "Authenticated Users" local group write access to the .svc file, everything works as it should, but I really don't want to do this (and don't think I should have to do this to get this to work). In fact, I'm confused as to why changing the file permissions would affect this at all, but it definitely seems to be the problem. I've found a couple of different suggestions to fix somewhat similar problems in the Microsoft forums (Here, and I would post more links, but am being told that new users can only post one link in a post), but none of the solutions help. Any help is much appreciated. I am certainly no IIS expert, and this one has got me stumped.

    Read the article

  • How to insert Excel 2003 values into a SQL Server 2005 database?

    - by vas
    Are there any rules / guidelines for data from XLS sheets to be inserted into a SQL DB? I have a group of Excel templates in 2005. Each relevant cell in the Excel template is named. When the Excel sheets are filled in, saved and submitted, the values are transferred to the database. The Excel sheets have names for the various cells that are to be filled in by the user. For example: for the total amount of milk at the beginning of a given month, there is an Excel cell named "mtsBpiPTR180"; for the total amount of milk at the end of a given month, there is a cell named "mtsEpiPTR180". I have added 2 new cells, named "mtsBpiPTR180PA" and "mtsEpiPTR180PA". Now I try to upload the Excel file, but I am unable to see my filled-in data from "mtsBpiPTR180PA" and "mtsEpiPTR180PA" in the related DB table. The above 2 are empty in the DB table, even though I have filled them in and successfully submitted the Excel sheets. No matter how much I search in the DB/stored procs, I am unable to find the actual stored proc, or how the data from the Excel sheet is inserted into the tables. So I was wondering: are there any rules / guidelines for data from XLS sheets to be inserted into a SQL DB?
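
    There is no single rule here: most likely the mapping from named cells to columns lives in whatever code or stored procedure reads the workbook, so the new cells have to be added to that mapping too. For reference, one way SQL Server 2005 itself can read an Excel 2003 file is OPENROWSET with the Jet provider; everything below (file path, sheet name, the use of a named range) is a placeholder sketch and requires the provider to be installed and ad hoc distributed queries to be enabled:

        -- one-time setup (requires sysadmin)
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

        -- read a whole sheet
        SELECT *
        FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                        'Excel 8.0;Database=C:\uploads\template.xls;HDR=YES',
                        'SELECT * FROM [Sheet1$]');

        -- a named range (e.g. mtsBpiPTR180PA) can be queried by its name instead of a sheet
        SELECT *
        FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                        'Excel 8.0;Database=C:\uploads\template.xls',
                        'SELECT * FROM mtsBpiPTR180PA');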

    Read the article

  • Shared Git repo syncing to SVN causing git svn rebase to pollute the repo with a lot of no-op merge problems

    - by John K
    This wasn't so bad at the beginning, but now I have hundreds of no-op merge problems (solved by git rebase --skip). I have setup a shared git repo for my group because it is easier to deal with. But the company uses SVN so I have to keep SVN in sync with GIT. Worked like a dream at first, but after weeks of doing this GIT is giving me a lot of the following errors. Applying: * making all config actions work Using index info to reconstruct a base tree... Falling back to patching base and 3-way merge... Auto-merging app/controllers/vulnerabilities_controller.rb CONFLICT (content): Merge conflict in app/controllers/vulnerabilities_controller.rb Auto-merging public/javascripts/network_analysis_vulnerability_config.js CONFLICT (content): Merge conflict in public/javascripts/network_analysis_vulnerability_config.js Failed to merge in the changes. Patch failed at 0046 * making all config actions work My workflow: git co master git pull origin git svn rebase ... deal with no-op merge problems ... git svn dcommit git pull origin git push origin The problem is that what is in SVN is the correct so I use git rebase --skip, but I have to do that hundreds of times before I can dcommit. How do I clear these merge problems permanently?

    Read the article

  • INSERT INTO ...SELECT syntax error in join operator

    - by user1477356
    I'm trying to write a shopping basket into an order + orderline in a SQL database from C# ASP.NET. The orderline will contain an ordernumber, total price, productid, quantity etc. for every item in the basket. The order itself will contain the ordernumber as primary key and will be linked to the different lines through it. Everything worked fine yesterday, but now that I've tried to use a SELECT command in the INSERT INTO statement to make things more dynamic, I'm getting the above-described syntax error. Does anybody know what's wrong with this statement?

        INSERT INTO [order] (klant_id,totaalprijs,btw,subtotaal,verzendkosten)
        SELECT klant.id,
               SUM(orderregel.totaalprijs),
               SUM(orderregel.btw),
               SUM(orderregel.totaalprijs) - SUM(orderregel.btw),
               7.50
        FROM orderregel
        INNER JOIN klant ON [order].klant_id = klant.id
        WHERE klant.username = 'jerry'
        GROUP BY id;

    The ordernumber in the "order" table is an autonumber. In the ASP code-behind there is a foreach loop which handles the lines being written for every product; an index is set to 0 outside of this loop and incremented by 1 at the end of each iteration. The ExecuteNonQuery for the order is only executed once, at the beginning of the first loop, and the lines are added afterwards with MAX(ordernumber) as the ordernumber. I hope I have provided enough information and somebody is capable of helping me. Thanks in advance!
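
    The likely cause is that the SELECT references [order], which is only the INSERT target and is not part of the FROM clause, so the join condition cannot be resolved. A sketch of a corrected version, assuming orderregel itself carries a klant_id column (if it does not, the join to klant has to go through whichever table does):

        INSERT INTO [order] (klant_id, totaalprijs, btw, subtotaal, verzendkosten)
        SELECT klant.id,
               SUM(orderregel.totaalprijs),
               SUM(orderregel.btw),
               SUM(orderregel.totaalprijs) - SUM(orderregel.btw),
               7.50
        FROM orderregel
        INNER JOIN klant ON orderregel.klant_id = klant.id   -- assumption: orderregel stores klant_id
        WHERE klant.username = 'jerry'
        GROUP BY klant.id;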

    Read the article
