Search Results

Search found 14378 results on 576 pages for 'record count'.


  • how to find by date from timestamp column in JPA criteria

    - by Kre Toni
    I want to find a record by date. In both the entity and the database table the datatype is timestamp. I am using an Oracle database. @Entity public class Request implements Serializable { @Id private String id; @Version private long version; @Temporal(TemporalType.TIMESTAMP) @Column(name = "CREATION_DATE") private Date creationDate; public Request() { } public Request(String id, Date creationDate) { setId(id); setCreationDate(creationDate); } public String getId() { return id; } public void setId(String id) { this.id = id; } public long getVersion() { return version; } public void setVersion(long version) { this.version = version; } public Date getCreationDate() { return creationDate; } public void setCreationDate(Date creationDate) { this.creationDate = creationDate; } } In the main method: public static void main(String[] args) { RequestTestCase requestTestCase = new RequestTestCase(); EntityManager em = Persistence.createEntityManagerFactory("Criteria").createEntityManager(); em.getTransaction().begin(); em.persist(new Request("005",new Date())); em.getTransaction().commit(); Query q = em.createQuery("SELECT r FROM Request r WHERE r.creationDate = :creationDate",Request.class); q.setParameter("creationDate",new GregorianCalendar(2012,12,5).getTime()); Request r = (Request)q.getSingleResult(); System.out.println(r.getCreationDate()); } In the Oracle database the record is: ID CREATION_DATE VERSION 006 05-DEC-12 05.34.39.200000 PM 1 The exception is: Exception in thread "main" javax.persistence.NoResultException: getSingleResult() did not retrieve any entities. at org.eclipse.persistence.internal.jpa.EJBQueryImpl.throwNoResultException(EJBQueryImpl.java:1246) at org.eclipse.persistence.internal.jpa.EJBQueryImpl.getSingleResult(EJBQueryImpl.java:750) at com.ktrsn.RequestTestCase.main(RequestTestCase.java:29)
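
    A possible fix (an editor's sketch, not the poster's code): GregorianCalendar months are 0-based, so new GregorianCalendar(2012,12,5) is actually 5 January 2013, and an equality test against a TIMESTAMP column only matches rows stored at exactly midnight. Querying the whole day as a range avoids both problems; the snippet below assumes the usual java.util and javax.persistence imports and drops into the existing main method.

    ```java
    // Query the whole day as a range instead of comparing against an exact timestamp.
    // Calendar months are 0-based: Calendar.DECEMBER is 11.
    Calendar start = new GregorianCalendar(2012, Calendar.DECEMBER, 5); // 2012-12-05 00:00:00
    Calendar end   = new GregorianCalendar(2012, Calendar.DECEMBER, 6); // 2012-12-06 00:00:00

    TypedQuery<Request> q = em.createQuery(
        "SELECT r FROM Request r WHERE r.creationDate >= :from AND r.creationDate < :to",
        Request.class);
    q.setParameter("from", start.getTime(), TemporalType.TIMESTAMP);
    q.setParameter("to", end.getTime(), TemporalType.TIMESTAMP);
    Request r = q.getSingleResult();
    ```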

    Read the article

  • MySQL Join Question

    - by rbaker86
    Hi, I'm struggling to write a particular MySQL join query. I have a table containing product data; each product can belong to multiple categories. This m:m relationship is satisfied using a link table. For this particular query I wish to retrieve all products belonging to a given category, but with each product record I also want to return the other categories that product belongs to. Ideally I would like to achieve this using an inner join on the categories table, rather than performing an additional query for each product record, which would be quite inefficient. My simplified schema is designed roughly as follows: products table: product_id, name, title, description, is_active, date_added, publish_date, etc.... categories table: category_id, name, title, description, etc... product_category table: product_id, category_id I have written the following query, which allows me to retrieve all the products belonging to the specified category_id. However, I'm really struggling to work out how to retrieve the other categories a product belongs to. SELECT p.product_id, p.name, p.title, p.description FROM prod_products AS p LEFT JOIN prod_product_category AS pc ON pc.product_id = p.product_id WHERE pc.category_id = $category_id AND UNIX_TIMESTAMP(p.publish_date) < UNIX_TIMESTAMP() AND p.is_active = 1 ORDER BY p.name ASC I'd be happy just retrieving the category IDs related to each returned product row, as I will have all category data stored in an object, and my application code can take care of the rest. Many thanks, Richard
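
    One common way to do this (a hedged sketch against the schema above, using MySQL's GROUP_CONCAT) is to join the link table twice: once to filter on the requested category and once to collect every category each product belongs to.

    ```sql
    SELECT p.product_id, p.name, p.title, p.description,
           GROUP_CONCAT(DISTINCT all_pc.category_id) AS category_ids
    FROM prod_products AS p
    INNER JOIN prod_product_category AS pc        -- filters to the requested category
            ON pc.product_id = p.product_id
           AND pc.category_id = $category_id
    INNER JOIN prod_product_category AS all_pc    -- collects every category of the product
            ON all_pc.product_id = p.product_id
    WHERE UNIX_TIMESTAMP(p.publish_date) < UNIX_TIMESTAMP()
      AND p.is_active = 1
    GROUP BY p.product_id, p.name, p.title, p.description
    ORDER BY p.name ASC;
    ```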

    Read the article

  • How to force client browser to download images from server rather using its cache

    - by anonim.developer
    Assume a simple aspx data entry page on which an admin user can upload an image as well as some other data. They are stored in the database, and the next time the admin visits that page to edit the record, the image data is fetched, a preview is generated and saved to disk (using GDI+), and the preview is shown in an image control. This procedure works fine the first time; however, if the image changes (a new one is uploaded), the next time the page is visited it shows the previously uploaded image. I debugged the application and everything works correctly. The new image data is in the database and the new preview is stored in the Temp location, yet the page shows the previous one. If I refresh the page it shows the new image preview. I should mention that the preview is always saved to disk under the same name (the id of each record). I think that is because IE and other browsers use the client cache instead of loading images each time a page is visited. I wonder if there is a way to force the client browser to refresh itself so the newly uploaded image is shown without user intervention. Thanks and appreciation in advance,
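
    A common workaround (a sketch only; the control and property names here are hypothetical) is cache-busting: append a value that changes with every upload, such as the record's last-modified ticks, to the preview URL so the browser treats the updated preview as a new resource. Alternatively, if the preview is served through a handler, sending no-cache headers for that response achieves the same effect.

    ```csharp
    // Hedged sketch: "previewImage" and "record" are hypothetical names for illustration.
    // The query string changes whenever the record changes, so the browser re-fetches.
    string cacheBuster = record.LastUpdated.Ticks.ToString();
    previewImage.ImageUrl = "~/Temp/" + record.Id + ".jpg?v=" + cacheBuster;
    ```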

    Read the article

  • Insert Into Two SQL Tables From XML Maintaining Relationship

    - by Thx
    I am looking to insert records from xml into two different tables. For example <Root> <A> <AValue>1</AValue> <Children> <B> <BValue>2</BValue> </B> </Children> </A> </Root> Would insert a record into table A AID AValue # 1 also insert a record into table B BID AID BValue # #(Same as AID Above) 2 I have this DECLARE @idoc INT DECLARE @doc NVARCHAR(MAX) SET @doc = ' <Root> <A> <AValue>1</AValue> <Children> <B> <BValue>2</BValue> </B> </Children> </A> </Root> ' EXEC sp_xml_preparedocument @idoc OUTPUT, @doc CREATE TABLE #A ( AID INT IDENTITY(1, 1) , AValue INT ) INSERT INTO #A SELECT * FROM OPENXML (@idoc, '/Root/A',2) WITH (AValue INT ) CREATE TABLE #B ( BID INT IDENTITY(1, 1) , AID INT , BValue INT ) INSERT INTO #B SELECT * FROM OPENXML (@idoc, '/Root/A/Children/B',2) WITH ( AID INT, BValue INT ) SELECT * FROM #A SELECT * FROM #B DROP TABLE #A DROP TABLE #B Thanks!
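
    One hedged way to keep the parent/child link is to use the XML data type's nodes() method together with an OUTPUT clause, capturing each generated AID next to its natural key. This assumes AValue uniquely identifies each &lt;A&gt; element and that #A and #B are created as above.

    ```sql
    DECLARE @doc XML;
    SET @doc = N'<Root><A><AValue>1</AValue><Children><B><BValue>2</BValue></B></Children></A></Root>';

    DECLARE @AMap TABLE (AID INT, AValue INT);

    -- Insert the parents, capturing each new identity value next to its natural key.
    INSERT INTO #A (AValue)
    OUTPUT inserted.AID, inserted.AValue INTO @AMap (AID, AValue)
    SELECT a.value('(AValue)[1]', 'INT')
    FROM @doc.nodes('/Root/A') AS x(a);

    -- Insert the children, joining back to the captured parent ids via AValue.
    INSERT INTO #B (AID, BValue)
    SELECT m.AID, b.value('(BValue)[1]', 'INT')
    FROM @doc.nodes('/Root/A') AS x(a)
    CROSS APPLY a.nodes('Children/B') AS y(b)
    INNER JOIN @AMap AS m ON m.AValue = a.value('(AValue)[1]', 'INT');
    ```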

    Read the article

  • Customer wants some data to appear after you later delete rows. System giant / not my creation. Fast

    - by John Sullivan
    This is a fairly common problem; it probably has a name, I just don't know what it is. A.) A user sees an obscure piece of information in row B of L_OBSCURE_INFO displayed on some screen at a certain point. It is in table L_OBSCURE_INFO. B.) Under certain circumstances we want to correctly delete data in L_OBSCURE_INFO. Unfortunately, nobody accounted for the fact that the user might want to backtrack and see some random piece of information that was most recently in L_OBSCURE_INFO. C.) The system is enormous and L_OBSCURE_INFO is used all the time. You have no idea what the ramifications are of implementing some kind of hack, and whatever you do, you don't want to introduce more bugs. I think the best approach would be to create an L_OBSCURE_INFO_HISTORY table and write a record into it every time you change data. But God help you keeping that accurate in a system where L_OBSCURE_INFO is being touched everywhere and you don't have time to implement L_OBSCURE_INFO_HISTORY. Is there a particularly easy, clever design solution for this kind of problem -- basically an elegant database hack? If not, does this kind of design problem fall under a particular class of problems, or does it have a name?
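
    The usual name for this is an audit (or history) table; the variant that flags rows instead of removing them is a soft delete. A trigger keeps the history accurate without touching any of the code paths that write to L_OBSCURE_INFO. A minimal hedged sketch, assuming purely for illustration that the table has columns (ID, Info):

    ```sql
    -- Hedged sketch: column names are illustrative only.
    CREATE TABLE L_OBSCURE_INFO_HISTORY
    (
        ID        INT          NOT NULL,
        Info      VARCHAR(100) NULL,
        DeletedAt DATETIME     NOT NULL DEFAULT GETDATE()
    );
    GO

    CREATE TRIGGER trg_L_OBSCURE_INFO_ArchiveDelete
    ON L_OBSCURE_INFO
    AFTER DELETE
    AS
    BEGIN
        -- Copy every deleted row into the history table before it is lost.
        INSERT INTO L_OBSCURE_INFO_HISTORY (ID, Info)
        SELECT d.ID, d.Info
        FROM deleted d;
    END;
    ```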

    Read the article

  • ASP.NET javascript embed in template column

    - by Mahesh
    Hi, I am developing a web page in which a rad grid displays the list of exams. I included a template column which shows count down timer when the exam is going to expire. Code is as given below: <telerik:RadGrid ID="radGrid" runat="server" AutoGenerateColumns="false"> <MasterTableView> <Columns> <telerik:GridTemplateColumn HeaderText="template" DataField="Date"> <ItemTemplate> <script language="JavaScript" type="text/javascript"> TargetDate = '<%# Eval("Date") %>'; BackColor = "white"; ForeColor = "black"; CountActive = true; CountStepper = -1; LeadingZero = true; DisplayFormat = "%%D%% Days, %%H%% Hours, %%M%% Minutes, %%S%% Seconds."; FinishMessage = "It is finally here!"; </script> <script language="JavaScript" src="http://scripts.hashemian.com/js/countdown.js" type="text/javascript"></script> </ItemTemplate> </telerik:GridTemplateColumn> </Columns> </MasterTableView> </telerik:RadGrid> I am giving DataTable as datasource to this grid. But my problem is , the template column is showing data only for the first record and the value taken is from the last row in the DataTable. For Ex: If I give data as given below, I can see 3 records but with only the first record displaying the counter with last value(10/10/2010 05:43 PM). 02/02/2011 01:00 AM 08/09/2010 11:00 PM 10/10/2010 05:43 PM Could you please help in this?? Thanks, Mahesh
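
    The countdown script relies on page-level globals (TargetDate, FinishMessage, and so on), so every row in the grid overwrites the same variables and only one counter renders, using whichever value was written last. A hedged alternative is to give each row its own element, e.g. <span class="countdown" data-target='<%# Eval("Date") %>'></span> in the ItemTemplate, and run one small per-row timer from a single script on the page. The sketch below assumes the bound date renders in a format the browser's Date constructor can parse.

    ```javascript
    // Hedged sketch: one independent countdown per row instead of page-level globals.
    window.onload = function () {
        var spans = document.getElementsByClassName("countdown");
        for (var i = 0; i < spans.length; i++) {
            (function (span) {
                var target = new Date(span.getAttribute("data-target"));
                setInterval(function () {
                    var ms = target - new Date();
                    if (ms <= 0) { span.innerHTML = "It is finally here!"; return; }
                    var s = Math.floor(ms / 1000);
                    span.innerHTML = Math.floor(s / 86400) + " Days, "
                        + Math.floor((s % 86400) / 3600) + " Hours, "
                        + Math.floor((s % 3600) / 60) + " Minutes, "
                        + (s % 60) + " Seconds.";
                }, 1000);
            })(spans[i]);
        }
    };
    ```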

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas how I'm going to go about it but was wondering if there are any recommendations or suggestions that might be more efficient. As an example imagine that a user is searching for a job. The job areas would be as follows. 1: Scotland 2: --- West Central 3: ------ Glasgow 4: ------ Etc 5: --- North East 6: ------ Ayrshire 7: ------ Etc A user can search specific (ie Glasgow) or in a larger area (ie Scotland). The two approaches I am considering are 1: keep a note of children in the database for each record (ie cat 1 would have 2, 3, 4 in its children field) and query against that record with a SELECT * FROM Jobs WHERE Category IN Areas.childrenField. 2: Use a recursive function to find all results who have a relation to the selected area The problems I see from both are 1: holding this data in the db will mean having to keep track of all changes to structure 2: Recursion is slow and inefficent Any ideas, suggestion or recommendations on the best approach? I'm using C# ASP.NET with MSSQL 2005 DB.
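
    A third option on MSSQL 2005 is to store only a parent_id on each area and expand the selected area into its descendants at query time with a recursive common table expression; that avoids maintaining a children list and avoids application-side recursion. A hedged sketch, with hypothetical Areas(area_id, parent_id) and Jobs(job_id, area_id) tables:

    ```sql
    -- Hedged sketch: expand the chosen area and all of its descendants at query time.
    WITH AreaTree AS
    (
        SELECT area_id
        FROM Areas
        WHERE area_id = @SelectedAreaId      -- e.g. Scotland or Glasgow

        UNION ALL

        SELECT a.area_id
        FROM Areas a
        INNER JOIN AreaTree t ON a.parent_id = t.area_id
    )
    SELECT j.*
    FROM Jobs j
    INNER JOIN AreaTree t ON j.area_id = t.area_id;
    ```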

    Read the article

  • getting active records to display as a plist

    - by phil swenson
    I'm trying to get a list of ActiveRecord results to display as a plist to be consumed by the iPhone. I'm using the plist gem v3.0. My model is called Post, and I want Post.all (or any array of Posts) to display correctly as a plist. I have it working fine for one Post instance: http://pastie.org/580902 shows what I would expect. To get that behavior I had to do this: class Post < ActiveRecord::Base def to_plist attributes.to_plist end end However, when I do a Post.all, I can't get it to display what I want. Here is what happens: http://pastie.org/580909 (I get marshalling). I want output more like this: http://pastie.org/580914 I suppose I could just iterate the result set and append the plist strings, but that seems ugly; I'm sure there is a more elegant way to do this. I am rusty on Ruby right now, so the elegant way isn't obvious to me. It seems like I should be able to override ActiveRecord so that result sets that pull back more than one record take the ActiveRecord::Base to_plist and use another to_plist implementation. In Rails, this would go in environment.rb, right?
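
    One hedged approach is to convert the result set to plain attribute hashes and emit those as a single document, rather than overriding ActiveRecord itself. This assumes the plist gem mixes to_plist into Array the same way it does into Hash; if it does not, Plist::Emit.dump(records.map(&:attributes)) is the equivalent call.

    ```ruby
    # Hedged sketch: emit a whole result set as one plist document.
    class Post < ActiveRecord::Base
      def self.to_plist(records = all)
        records.map(&:attributes).to_plist
      end
    end

    # Usage, e.g. in a controller action:
    # render :text => Post.to_plist, :content_type => 'text/xml'
    ```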

    Read the article

  • How to lock a transaction for reading a row and then inserting in Hibernate?

    - by at
    I have a table with a name and a name_count. So when I insert a new record, I first check what the maximum name_count is for that name. I then insert the record with that maximum + 1. Works great... except with mysql 5.1 and hibernate 3.5, by default the reads don't respect transaction boundaries. 2 of these inserts for the same name could happen at the same time and end up with the same name_count, which completely screws my application! Unfortunately, there are some specific situations where the above is actually fairly common. So what do I do? I assume I can do a pessimistic lock where any row I read is locked for further reading until I commit or roll-back my transaction. Or I can do an optimistic lock with a version column that automatically keeps trying until there are no conflicts? What's the best approach for my situation and how do I specify it in Hibernate 3.5 and mysql 5.1? The above table is massive and accessed frequently.
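
    A hedged sketch of the pessimistic option: lock the existing rows for the name (a SELECT ... FOR UPDATE under the covers) before computing max + 1, so a concurrent transaction for the same name blocks until commit. The entity and field names below are hypothetical; note this needs the table to be InnoDB (MySQL 5.1 defaults to MyISAM, which ignores row locks), and when no rows exist yet for a name two first inserts can still collide, so a unique constraint on (name, name_count) is the real backstop.

    ```java
    // Hedged sketch; NameRecord and its fields are hypothetical, not from the question.
    Session session = sessionFactory.getCurrentSession();

    // Lock every existing row for this name so a concurrent insert for the same
    // name waits until this transaction commits or rolls back.
    List<NameRecord> existing = session.createQuery(
            "from NameRecord r where r.name = :name")
        .setParameter("name", name)
        .setLockMode("r", LockMode.UPGRADE) // LockMode.PESSIMISTIC_WRITE also exists in 3.5
        .list();

    int max = 0;
    for (NameRecord r : existing) {
        max = Math.max(max, r.getNameCount());
    }

    session.save(new NameRecord(name, max + 1));
    ```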

    Read the article

  • TV Guide script - getting current date programmes to show

    - by whitstone86
    This is part of my TV guide script: //Connect to the database mysql_connect("localhost","root","PASSWORD"); //Select DB mysql_select_db("mytvguide"); //Select only results for today and future $result = mysql_query("SELECT programme, channel, episode, airdate, expiration, setreminder FROM mediumonair where airdate >= now()"); The episodes show up, so there are no issues there. However, it's getting the database to find data that's the issue. If I add a record for a programme that airs today this should show: Medium showing on TV4 8:30pm "Episode" Set Reminder Medium showing on TV4 May 18th - 6:25pm "Episode 2" Set Reminder Medium showing on TV4 May 18th - 10:25pm "Episode 3" Set Reminder Medium showing on TV4 May 19th - 7:30pm "Episode 3" Set Reminder Medium showing on TV4 May 20th - 1:25am "Episode 3" Set Reminder Medium showing on TV4 May 20th - 6:25pm "Episode 4" Set Reminder but this shows instead: Medium showing on TV4 May 18th - 6:25pm "Episode 2" Set Reminder Medium showing on TV4 May 18th - 10:25pm "Episode 3" Set Reminder Medium showing on TV4 May 19th - 7:30pm "Episode 3" Set Reminder Medium showing on TV4 May 20th - 1:25am "Episode 3" Set Reminder Medium showing on TV4 May 20th - 6:25pm "Episode 4" Set Reminder I almost have the SQL working; just not sure what the right code is here, to avoid the second mistake showing - as the record (which indicates a show currently airing) does not seem to work at present. Please can anyone help me with this? Thanks
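
    Since the table already carries an expiration column, one hedged fix is to filter on that instead of airdate, so a programme that started earlier today (or is airing right now) is still returned. If expiration means something else in this schema, the same idea can be expressed as airdate >= CURDATE() to include everything from the start of today.

    ```php
    // Hedged sketch: keep anything that has not yet expired, so a programme that
    // started earlier today (or is currently airing) still shows up.
    $result = mysql_query(
        "SELECT programme, channel, episode, airdate, expiration, setreminder
         FROM mediumonair
         WHERE expiration >= NOW()
         ORDER BY airdate ASC"
    );
    ```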

    Read the article

  • SQL Server 2008 - Update a temporary table

    - by user336786
    Hello, I have stored procedure in which I am trying to retrieve the last ticket completed by each user listed in a comma-delimited string of usernames. The user may not have a ticket associated with them, in this case I know that i just need to return null. The two tables that I am working with are defined as follows: User ---- UserName, FirstName, LastName Ticket ------ ID, CompletionDateTime, AssignedTo, AssignmentDate, StatusID TicketStatus ------------ ID, Comments I have created a stored procedure in which I am trying to return the last completed ticket for a comma-delimited list of usernames. Each record needs to include the comments associated with it. Currently, I'm trying the following: CREATE TABLE #Tickets ( [UserName] nvarchar(256), [FirstName] nvarchar(256), [LastName] nvarchar(256), [TicketID] int, [DateCompleted] datetime, [Comments] text ) -- This variable is actually passed into the procedure DECLARE @userList NVARCHAR(max) SET @userList='user1,user2,user2' -- Obtain the user information for each user INSERT INTO #Tickets ( [UserName], [FirstName], [LastName] ) SELECT u.[UserName], u.[FirstName], u.[LastName] FROM User u INNER JOIN dbo.ConvertCsvToTable(@userList) l ON u.UserName=l.item At this point, I have the username, first and last name for each user passed in. However, I do not know how to actually get the last ticket completed for each of these users. How do I do this? I believe I should be updating the temp table I have created. At the same time, id do not know how to get just the last record in an update statement. Thank you!
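
    One hedged way to finish this is to rank each user's completed tickets with ROW_NUMBER() and update the temp table from the top-ranked row per user; users with no completed tickets simply keep their NULL columns.

    ```sql
    -- Hedged sketch: keep only the most recent completed ticket per assigned user.
    ;WITH LastTickets AS
    (
        SELECT t.AssignedTo,
               t.ID,
               t.CompletionDateTime,
               ts.Comments,
               ROW_NUMBER() OVER (PARTITION BY t.AssignedTo
                                  ORDER BY t.CompletionDateTime DESC) AS rn
        FROM Ticket t
        INNER JOIN TicketStatus ts ON ts.ID = t.StatusID
        WHERE t.CompletionDateTime IS NOT NULL
    )
    UPDATE tmp
    SET tmp.TicketID      = lt.ID,
        tmp.DateCompleted = lt.CompletionDateTime,
        tmp.Comments      = lt.Comments
    FROM #Tickets tmp
    INNER JOIN LastTickets lt
            ON lt.AssignedTo = tmp.UserName
           AND lt.rn = 1;
    ```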

    Read the article

  • Speed up csv export when using php from mysql database query

    - by John
    Ok, so i've got a web system (built on codeigniter & running on mysql) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection that want, pretty standard stuff. They can then buy that information and download it via that system. The queries run very fast, but when it comes to applying that query to the database,and exporting it to csv, once the datasets get to around the 30,000 record mark (each row has around 40 columns of which about 20 are all populated with on average 20 chars of data per cell) it can take 5 or so minutes to export to csv. So, my question is, what is the main cause for the slowness? Is it that the resultset of data from the query is so large, that it is running into memory issues? Therefore should i allow much more memory to the process? Or, is there a much more efficient way of exporting to csv from a mysql query that i'm not doing? Should i save the contents of the query to a temp table and simply export the temp table to csv? Or am i going about this all wrong? Also, is the fact that i'm using Codeigniters Active Record for this prohibitive due to the way that it stores the resultset? Any advice is welcome! Thank you for reading!
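
    The usual culprit is building the whole CSV (and a fully buffered result set) in memory before writing anything. A hedged sketch that bypasses Active Record, runs the query unbuffered, and streams each row straight to the response with fputcsv keeps memory flat; $sql here stands for the query your selections already produce, and the credentials are placeholders. If the file can be written on the database server itself, SELECT ... INTO OUTFILE is faster still.

    ```php
    <?php
    // Hedged sketch using plain mysqli rather than CodeIgniter's Active Record.
    $mysqli = new mysqli('localhost', 'user', 'pass', 'postal_db'); // hypothetical credentials

    header('Content-Type: text/csv');
    header('Content-Disposition: attachment; filename="export.csv"');

    $out = fopen('php://output', 'w');

    // MYSQLI_USE_RESULT = unbuffered: rows are streamed from the server one at a time.
    $result = $mysqli->query($sql, MYSQLI_USE_RESULT);
    while ($row = $result->fetch_assoc()) {
        fputcsv($out, $row);   // write each row straight to the response
    }

    $result->close();
    fclose($out);
    ```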

    Read the article

  • Destroy? Delete? What's going on here? Rails 2.3.5

    - by Steve
    I am new to Rails. My Rails version is 2.3.5. I found usage like this: in the controller, a destroy method is defined, and in the view you can use :action => "delete" to fire that method. Doesn't the action name have to be the same as the method name? Why is delete mapped to destroy? Again, in my controller I define a method called destroy to delete a record. In a view, I have <%= link_to "remove", :action => 'destroy', :id => myrecord %>. But it never works in practice. Every time I press the remove link, it redirects me to the show view, showing the record's content. I am pretty sure that my destroy method is: def destroy @myobject = MyObject.find(params[:id]) @myobject.destroy redirect_to :action => 'index' end If I change the method name from destroy to something like remove_me and change the action name to remove_me in the view, everything works as expected. For the above two weird problems, I am sure there is no tricky routing set in my configuration. All in all, it seems destroy and delete are mysterious keywords in Rails. Can anyone explain this to me? Thank you very much.
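
    In Rails 2.3 the conventional action name is destroy, and with map.resources the destroy action is matched by the HTTP DELETE verb on the record's URL; a plain link_to issues a GET, which the RESTful routes send to show, which is why you land on the record's page. A hedged sketch of the usual wiring (resource names hypothetical; the generated link needs JavaScript enabled because Rails turns it into a small form that submits with _method=delete):

    ```
    # config/routes.rb
    map.resources :my_objects

    # app/views/my_objects/index.html.erb
    <%= link_to "remove", my_object_path(myrecord),
                :method => :delete,
                :confirm => "Are you sure?" %>

    # app/controllers/my_objects_controller.rb
    def destroy
      @myobject = MyObject.find(params[:id])
      @myobject.destroy
      redirect_to :action => 'index'
    end
    ```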

    Read the article

  • How to emulate a BEFORE DELETE trigger in SQL Server 2005

    - by Mark
    Let's say I have three tables, [ONE], [ONE_TWO], and [TWO]. [ONE_TWO] is a many-to-many join table with only [ONE_ID and [TWO_ID] columns. There are foreign keys set up to link [ONE] to [ONE_TWO] and [TWO] to [ONE_TWO]. The FKs use the ON DELETE CASCADE option so that if either a [ONE] or [TWO] record is deleted, the associated [ONE_TWO] records will be automatically deleted as well. I want to have a trigger on the [TWO] table such that when a [TWO] record is deleted, it executes a stored procedure that takes a [ONE_ID] as a parameter, passing the [ONE_ID] values that were linked to the [TWO_ID] before the delete occurred: DECLARE @Statement NVARCHAR(max) SET @Statement = '' SELECT @Statement = @Statement + N'EXEC [MyProc] ''' + CAST([one_two].[one_id] AS VARCHAR(36)) + '''; ' FROM deleted JOIN [one_two] ON deleted.[two_id] = [one_two].[two_id] EXEC (@Statement) Clearly, I need a BEFORE DELETE trigger, but there is no such thing in SQL Server 2005. I can't use an INSTEAD OF trigger because of the cascading FK. I get the impression that if I use a FOR DELETE trigger, when I join [deleted] to [ONE_TWO] to find the list of [ONE_ID] values, the FK cascade will have already deleted the associated [ONE_TWO] records so I will never find any [ONE_ID] values. Is this true? If so, how can I achieve my objective? I'm thinking that I'd need to change the FK joining [TWO] to [ONE_TWO] to not use cascades and to do the delete from [ONE_TWO] manually in the trigger just before I manually delete the [TWO] records. But I'd rather not go through all that if there is a simpler way.
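
    Your suspicion is correct: cascaded deletes complete before an AFTER (FOR) DELETE trigger fires, so by the time the trigger joins [deleted] to [ONE_TWO] the link rows are already gone. The usual workaround is the one you describe: change the [ONE_TWO]-to-[TWO] foreign key to NO ACTION and do that cascade by hand in an INSTEAD OF trigger. A hedged sketch, assuming [TWO]'s key column is [TWO_ID]; note that an INSTEAD OF trigger does not re-fire when the trigger itself deletes from the same table.

    ```sql
    -- Hedged sketch: requires the FK from [ONE_TWO] to [TWO] to no longer cascade.
    CREATE TRIGGER trg_TWO_InsteadOfDelete
    ON [TWO]
    INSTEAD OF DELETE
    AS
    BEGIN
        -- Capture the ONE_IDs while the link rows still exist.
        DECLARE @Statement NVARCHAR(MAX);
        SET @Statement = N'';

        SELECT @Statement = @Statement
            + N'EXEC [MyProc] ''' + CAST(ot.[ONE_ID] AS VARCHAR(36)) + '''; '
        FROM deleted d
        INNER JOIN [ONE_TWO] ot ON ot.[TWO_ID] = d.[TWO_ID];

        -- Do the cascade manually, then perform the actual delete.
        DELETE ot FROM [ONE_TWO] ot
        INNER JOIN deleted d ON d.[TWO_ID] = ot.[TWO_ID];

        DELETE t FROM [TWO] t
        INNER JOIN deleted d ON d.[TWO_ID] = t.[TWO_ID];

        EXEC (@Statement);
    END;
    ```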

    Read the article

  • .NET List Thread-Safe Implementation Suggestion needed

    - by Bamboo
    The .NET List class isn't thread-safe. I hope to achieve the minimal locking needed while still fulfilling the requirement that for reading, phantom records are allowed, and for writing, operations must be thread-safe so there won't be any lost updates. So I have something like: public static List<string> list = new List<string>(); In methods that call List.Add/List.Remove, I always lock to ensure thread safety: lock (lockHelper) { list.Add(obj); or list.Remove(obj); } In methods that only read the list, I don't care about phantom records, so I go ahead and read without any locking. In this case I return a bool by checking whether a string has been added: if (list.Count() != 0) { return list.Contains("some string") } All I did was lock write accesses and allow read accesses to go through without any locking. Is my thread-safety idea valid? I understand there is List size expansion. Will it be OK? My guess is that when a List is expanding, it may use a temporary list. This is OK because the temporary list size will always have a boundary, and the .NET class is well implemented, i.e. there shouldn't be any IndexOutOfBound or circular reference problems when a read is caught during an update.
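
    The unlocked reads are not actually safe: List<T>.Contains can walk the backing array while an Add is resizing it, which can throw or return garbage rather than merely showing a phantom. A hedged sketch using ReaderWriterLockSlim keeps reads cheap and concurrent while writes stay exclusive:

    ```csharp
    using System.Collections.Generic;
    using System.Threading;

    public static class SafeNames
    {
        private static readonly List<string> list = new List<string>();
        private static readonly ReaderWriterLockSlim rw = new ReaderWriterLockSlim();

        public static void Add(string item)
        {
            rw.EnterWriteLock();
            try { list.Add(item); }
            finally { rw.ExitWriteLock(); }
        }

        public static void Remove(string item)
        {
            rw.EnterWriteLock();
            try { list.Remove(item); }
            finally { rw.ExitWriteLock(); }
        }

        public static bool Contains(string item)
        {
            rw.EnterReadLock();
            try { return list.Count != 0 && list.Contains(item); }
            finally { rw.ExitReadLock(); }
        }
    }
    ```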

    Read the article

  • Not getting concept of null

    - by appu
    Hey guys, I am beginning with MySQL and am not able to grasp the concept of NULL. Check the screenshot (declare_not_null, link). In it I specifically declared the 'name' field to be NOT NULL. When I run the 'desc test' command, the table description shows the default value for the name field to be NULL. Why is that so? From what I have read about NULL, it connotes missing information or information that is not applicable. So when I declare a field to be NOT NULL it implies (as per my understanding) that the user must enter a value for the name field, else the DB engine should generate an error, i.e. the record will not be entered in the DB. However, when I run 'insert into test value();' the DB engine enters the record in the table. Check the screenshot (empty_value, link). FLICKR LINKS declare_not_null: http://www.flickr.com/photos/55097319@N03/5302758813/ empty_values: check the second screenshot on Flickr. Q2: What would be the SQL statement to drop a primary key from a table's field? If I use 'ALTER TABLE test drop key id;' it gives the following: ERROR: Incorrect table definition; there can be only one auto column and it must be defined as a key. Thanks for your help.
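
    On Q1, the NULL shown in the Default column of DESC means "no default was declared", not that NULL is the default; the empty INSERT succeeds because, outside strict mode, MySQL fills a NOT NULL column with its implicit default ('' for strings) and only raises a warning. On Q2, an AUTO_INCREMENT column must be part of a key, so that attribute has to be removed before the primary key can be dropped. A hedged sketch of both:

    ```sql
    -- Q1: with strict mode on, the empty insert is rejected instead of silently
    -- filled with the implicit default.
    SET SESSION sql_mode = 'STRICT_ALL_TABLES';
    INSERT INTO test VALUES ();   -- now fails: field 'name' doesn't have a default value

    -- Q2: strip AUTO_INCREMENT first, then the primary key can be dropped.
    ALTER TABLE test MODIFY id INT NOT NULL;
    ALTER TABLE test DROP PRIMARY KEY;
    ```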

    Read the article

  • JQuery ajax success help

    - by Jason
    Hi all, I am implementing a "Quick delete" function on a page I am creating. The way it works is like this: 1: You click the "delete" button in the table row for the record that you want to delete. 2: The page sends a request to the ajax page and returns a success message of "yes" or a failure message of "no". My issue is that if I get a success message of "yes" I want to hide the row for that record. I am having issues "finding" the row using jQuery. Here is my jQuery code: $(document).ready(function(){ $(".pane .btn-delete").click(function(){ var element = $(this); var del_id = element.attr("id"); var dataString = 'action=del&cid=' + del_id; if(confirm("Are you sure you want to delete this content block?")) { $("#msgbox").addClass('ajaxmsg').text('Checking permissions....').fadeIn(1000); $.ajax({ type: "get", url: "ajax/admArticles_ajax.php", data: dataString, success: function(data){ switch(data) { case "yes": $("#msgbox").addClass('ajaxmsg').text('Deleting content block....').fadeIn(1000); $(this).parents(".pane").animate({ backgroundColor: "#fbc7c7" }, "fast") .animate({ opacity: "hide" }, "slow") break case "no": $("#msgbox").removeClass().addClass('error').text('You do not have the correct permissions to delete this content....').fadeIn(1000); break default: }; } }); } return false; }); }); These are the lines of code I am using to hide the row; however, it is not working because I don't think $(this).parents(".pane") finds the element. $(this).parents(".pane").animate({ backgroundColor: "#fbc7c7" }, "fast") .animate({ opacity: "hide" }, "slow") Any help would be greatly appreciated. Thanks...
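
    Inside the $.ajax success callback, this no longer refers to the clicked button, so $(this).parents(".pane") matches nothing; the element variable captured in the click handler is still in scope and can be used instead (closest() is also a little more direct than parents()). A hedged rewrite of the success handler; wrapping data in $.trim() is worth adding if the PHP response carries stray whitespace.

    ```javascript
    success: function (data) {
        if (data === "yes") {
            $("#msgbox").addClass('ajaxmsg').text('Deleting content block....').fadeIn(1000);
            // use the button captured in the click handler, not "this"
            element.closest(".pane")
                   .animate({ backgroundColor: "#fbc7c7" }, "fast")
                   .animate({ opacity: "hide" }, "slow");
        } else if (data === "no") {
            $("#msgbox").removeClass().addClass('error')
                        .text('You do not have the correct permissions to delete this content....')
                        .fadeIn(1000);
        }
    }
    ```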

    Read the article

  • Access database Need to prevent from approving overlapping OT.Second Try with modified request Not a programmer [on hold]

    - by user2512764
    Employees sign up on the company website for advance overtime lines. The Access table already has the overtime signups, so the user does not need to add the time; they only need to add a location to mark a line as approved. The table has the fields Employee name, Date, Start time, End time and Location, and all fields have data except Location. In the database I have created a form based on this table. Since the table already has most of the information, the user only has to add a location in the form field in order to approve overtime. For example: the user approves overtime for employee 'John' which starts on 7/1/2013 at 0400-0800, and the location is successfully added. The user then tries to add a location for John again on a line that runs 7/1/2013 at 0600-0900. Again, we are not entering the start time, end time or date, as they are already in the table; we are only entering the location as the approval. As soon as the user enters the location for John in the form field, there is a conflict with the previously approved overtime line. The program needs to check the employee name, date and time against previously approved (location added) overtime lines, and when they overlap, the location in the current record needs to be deleted and the form should move to the next record. I hope I have explained it in an understandable format. Thank you,

    Read the article

  • jquery.clone() and ASP.NET Forms

    - by Jeff
    So I have a page where I would like to be able to add multiple, dynamic users to a record in a database. Here's the rough start page: <div id="records"> <div id="userRecord"> Name: <asp:TextBox runat="server" ID="objNameTextBox"></asp:TextBox> <br /> Phone Number: <asp:TextBox runat="server" ID="objPhoneNumberTextBox"></asp:TextBox> <br /> </div> </div> And the jquery: $(function () { $(".button").button().click(function (event) { addnew(); event.preventDefault(); }); }) function addnew() { $('#userRecord').clone().appendTo('#records'); } So my question is what do I use within ASP.NET to be able to poll all of the data in the form and add a unique record for each #userRecord div within the #records div? Yes - I should change the userRecord to a class - I will deal with that. This is just simple testing here. Should I look in JSON for this type of function? I'm not familiar with it but could figure it out if that is indeed my best option. Thanks for the guidance!

    Read the article

  • Field to display Previous 30 Day Total

    - by whytheq
    I've got this table: CREATE TABLE #Data1 ( [Market] VARCHAR(100) NOT NULL, [Operator] VARCHAR(100) NOT NULL, [Date] DATETIME NOT NULL, [Measure] VARCHAR(100) NOT NULL, [Amount] NUMERIC(36,10) NOT NULL, --new calculated fields [DailyAvg_30days] NUMERIC(38,6) NULL DEFAULT 0 ) I've populated all the fields apart from DailyAvg_30days. This field needs to show the total for the preceding 30 days e.g. 1. if Date for a particular record is 2nd Dec then it will be the total for the period 3rd Nov - 2nd Dec inclusive. 2. if Date for a particular record is 1st Dec then it will be the total for the period 2nd Nov - 1st Dec inclusive. My attempt to try to find these totals before updating the table is as follows: SELECT a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount], [DailyAvg_30days] = SUM(b.[Amount]) FROM #Data1 a INNER JOIN #Data1 b ON a.[Market] = b.[Market] AND a.[Operator] = b.[Operator] AND a.[Measure] = b.[Measure] AND a.[Date] >= b.[Date]-30 AND a.[Date] <= b.[Date] GROUP BY a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount] ORDER BY 1,2,4,3 Is this a valid approach or do I need to approach this from a different angle?
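
    The approach is workable, but the join condition appears inverted: a.[Date] >= b.[Date]-30 AND a.[Date] <= b.[Date] collects the following 30 days for each row rather than the preceding ones, and a plain SELECT never populates the new column. A hedged sketch using a correlated subquery in an UPDATE, summing the 30 days up to and including each row's own date:

    ```sql
    -- Hedged sketch: sum the 30 days up to and including each row's own date.
    UPDATE a
    SET DailyAvg_30days = ISNULL((
            SELECT SUM(b.[Amount])
            FROM #Data1 b
            WHERE b.[Market]   = a.[Market]
              AND b.[Operator] = a.[Operator]
              AND b.[Measure]  = a.[Measure]
              AND b.[Date]  >  DATEADD(DAY, -30, a.[Date])
              AND b.[Date]  <= a.[Date]
        ), 0)
    FROM #Data1 a;
    ```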

    Read the article

  • Why does my tracking service freeze when the phone moves?

    - by user2878181
    I have developed a service which includes a timer task and runs every 5 minutes to keep a tracking record of the device; every five minutes it adds a record to the database. My service works fine when the phone is not moving, i.e. it adds records every 5 minutes as it should. But I have noticed that when the phone is on the move it updates the points only after 10 or 20 minutes, i.e. whenever the user stops along his way. Does the service freeze while the phone is moving, and if so, how does WhatsApp Messenger manage it? I am including my onStart method below. Please help. @Override public void onStart(Intent intent, int startId) { Toast.makeText(this, "My Service Started", Toast.LENGTH_LONG).show(); Log.d(TAG, "onStart"); mLocationClient.connect(); final Handler handler_service = new Handler(); timer_service = new Timer(); TimerTask thread_service = new TimerTask() { @Override public void run() { handler_service.post(new Runnable() { @Override public void run() { try { some function of tracking } }); } }; timer_service.schedule(thread_service, 1000, service_timing); //sync thread final Handler handler_sync = new Handler(); timer_sync = new Timer(); TimerTask thread_sync = new TimerTask() { @Override public void run() { handler_sync.post(new Runnable() { @Override public void run() { try { //connecting to the central server for updation Connect(); } catch (Exception e) { // TODO Auto-generated catch block } } }); } }; timer_sync.schedule(thread_sync,2000, sync_timing); }
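
    A likely explanation is that a Timer inside a Service only ticks while the CPU is awake; once the screen is off and the phone is in a pocket the device dozes, so the task is simply not scheduled until something wakes it up. The usual fix is to let AlarmManager wake the device on the interval instead of a TimerTask. A hedged sketch, where LocationAlarmReceiver is a hypothetical BroadcastReceiver that records the point while holding a partial wake lock:

    ```java
    // Hedged sketch: schedule the periodic fix with AlarmManager instead of a Timer,
    // so the device is woken up even when it is asleep.
    AlarmManager am = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
    Intent intent = new Intent(this, LocationAlarmReceiver.class); // hypothetical receiver
    PendingIntent pi = PendingIntent.getBroadcast(this, 0, intent,
            PendingIntent.FLAG_UPDATE_CURRENT);

    long interval = 5 * 60 * 1000L; // five minutes
    am.setRepeating(AlarmManager.RTC_WAKEUP,
            System.currentTimeMillis() + interval, interval, pi);
    ```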

    Read the article

  • Bad disk performance on HP DL360 with Smart Array P400i RAID controller

    - by sarge
    I have a HP DL360 server with 4x 146GB SAS disks and a Smart Array P400i RAID controller with 256MB cache. The disks are in RAID 5 (3 disks + 1 hot spare). The server is running VMware ESX 3i. The disk write performance is really bad. Here are some numbers: ns1:~# hdparm -tT /dev/sda /dev/sda: Timing cached reads: 3364 MB in 2.00 seconds = 1685.69 MB/sec Timing buffered disk reads: 18 MB in 3.79 seconds = 4.75 MB/sec ns1:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync" 125000+0 records in 125000+0 records out 1024000000 bytes (1.0 GB) copied, 282.307 s, 3.6 MB/s real 4m52.003s user 0m2.160s sys 3m10.796s Compared to another server those number are terrible: Dell R200, 2x 500GB SATA disks, PERC raid controller (disks are mirrored). web4:~# hdparm -tT /dev/sda /dev/sda: Timing cached reads: 6584 MB in 2.00 seconds = 3297.79 MB/sec Timing buffered disk reads: 316 MB in 3.02 seconds = 104.79 MB/sec web4:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync" 125000+0 records in 125000+0 records out 1024000000 bytes (1.0 GB) copied, 35.2919 s, 29.0 MB/s real 0m36.570s user 0m0.476s sys 0m32.298s The server isn't very loaded and the VMware Infrastructure Client performance monitor is showing 550KBps average read and 1208KBps average write for the last 30 minutes (highest write rate: 6.6MBps). This has been a problem from the start. Any ideas?

    Read the article

  • IPMI not functioning with Network Bonding

    - by muhammed sameer
    Hey, I am having problems with running IPMI on my servers that have network bonding enabled. Platform: CentOS release 5.3 (Final) Kernel: 2.6.18-92.el5 64bit Dell PowerEdge 1950 Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet I have bonded the interface eth0 and eth1 as active passive, with eth0 as the active interface, below is conf description from /proc Bonding Mode: fault-tolerance (active-backup) Primary Slave: eth0 Currently Active Slave: eth0 MII Status: up MII Polling Interval (ms): 30 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth0 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:22:19:56:b9:cd Slave Interface: eth1 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:22:19:56:b9:cf My IPMI device is as follows IPMI Device Information Interface Type: KCS (Keyboard Control Style) Specification Version: 2.0 I2C Slave Address: 0x10 NV Storage Device: Not Present Base Address: 0x0000000000000CA8 (I/O) Register Spacing: 32-bit Boundaries I Have used openIPMI as well as freeipmi both to control the chassis via the IPMI card, but on servers which have bonding enabled, the command times out, below is the full run of the command with debug info. ipmi_lan_send_cmd:opened=[0], open=[4482848] IPMI LAN host 70.87.28.115 port 623 Sending IPMI/RMCP presence ping packet ipmi_lan_send_cmd:opened=[1], open=[4482848] No response from remote controller Get Auth Capabilities command failed ipmi_lan_send_cmd:opened=[1], open=[4482848] No response from remote controller Get Auth Capabilities command failed Error: Unable to establish LAN session Failed to open LAN interface Unable to get Chassis Power Status On the other hand I configured IPMI on a box with the same specs as mentioned above without bonding and IPMI works perfectly. Has anyone faced this problem with IPMI + Bonding ? I would be thankful is someone helps circumvent this issue. Muhammed Sameer

    Read the article

  • Grant access for users on a separate domain to SharePoint

    - by Geo Ego
    Hello. I just completed development of a SharePoint site on a virtual server and am currently in the process of granting users from a different domain to the site. The SharePoint domain is SHAREPOINT, and the domain with the users I want to give access to is COMPANY. I have provided them with a link to the site and added them as users via SharePoint, which is all I thought I would need to do. However, when they go to the link, the site shows them a SharePoint error page. In the security event log, I am showing the following: Event Type: Failure Audit Event Source: Security Event Category: Object Access Event ID: 560 Date: 3/18/2010 Time: 11:11:49 AM User: COMPANY\ThisUser Computer: SHAREPOINT Description: Object Open: Object Server: Security Account Manager Object Type: SAM_ALIAS Object Name: DOMAINS\Account\Aliases\00000404 Handle ID: - Operation ID: {0,1719489} Process ID: 416 Image File Name: C:\WINDOWS\system32\lsass.exe Primary User Name: SHAREPOINT$ Primary Domain: COMPANY Primary Logon ID: (0x0,0x3E7) Client User Name: ThisUser Client Domain: PRINTRON Client Logon ID: (0x0,0x1A3BC2) Accesses: AddMember RemoveMember ListMembers ReadInformation Privileges: - Restricted Sid Count: 0 Access Mask: 0xF Then, four of these in a row: Event Type: Failure Audit Event Source: Security Event Category: Object Access Event ID: 560 Date: 3/18/2010 Time: 11:12:08 AM User: NT AUTHORITY\NETWORK SERVICE Computer: SHAREPOINT Description: Object Open: Object Server: SC Manager Object Type: SERVICE OBJECT Object Name: WinHttpAutoProxySvc Handle ID: - Operation ID: {0,1727132} Process ID: 404 Image File Name: C:\WINDOWS\system32\services.exe Primary User Name: SHAREPOINT$ Primary Domain: COMPANY Primary Logon ID: (0x0,0x3E7) Client User Name: NETWORK SERVICE Client Domain: NT AUTHORITY Client Logon ID: (0x0,0x3E4) Accesses: Query status of service Start the service Query information from service Privileges: - Restricted Sid Count: 0 Access Mask: 0x94 Any ideas what permissions I need to grant to the user to get them access to SharePoint?

    Read the article

  • Openfiler iSCSI performance

    - by Justin
    Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5. Dual Xeon processor, 6GB ECC RAM, Intel Gigabit Server NIC, SAS controller with and 3 10K SAS drives in a RAID 5. When I run a simple write test from the box directly the performance is very good: [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s So I created a LUN, attached it to another box I have running ESXi 5.1 (Core i7 2600k, 16GB RAM, Intel Gigabit Server NIC) and created a new datastore. Once I created the datastore I was able to create and start a VM running CentOS with 2GB of RAM and 16GB of disk space. The OS installed fine and I'm able to use it but when I ran the same test inside the VM I get dramatically different results: [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB) copied, 26.8786 s, 39.0 MB/s [root@localhost ~]# Both servers have brand new Intel Server NIC's and I have Jumbo Frames enabled on the switch, the openfiler box as well as the VMKernel adapter on the ESXi box. I can confirm this is set up properly by using the vmkping command from the ESXi host: ~ # vmkping 10.0.0.1 -s 9000 PING 10.0.0.1 (10.0.0.1): 9000 data bytes 9008 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.533 ms 9008 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.736 ms 9008 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.570 ms The only thing I haven't tried as far as networking goes is bonding two interfaces together. I'm open to trying that down the road but for now I am trying to keep things simple. I know this is a pretty modest setup and I'm not expecting top notch performance but I would like to see 90-100MB/s. Any ideas?

    Read the article
