Search Results

Search found 2042 results on 82 pages for 'average'.


  • Problem with JMX query of Coherence node MBeans visible in JConsole

    - by Quinn Taylor
    I'm using JMX to build a custom tool for monitoring remote Coherence clusters at work. I'm able to connect just fine and query MBeans directly, and I've acquired nearly all the information I need. However, I've run into a snag when trying to query MBeans for specific caches within a cluster, which is where I can find stats about the total number of gets/puts, average time for each, etc. The MBeans I'm trying to access programmatically are visible when I connect to the remote process using JConsole, and have names like this:

        Coherence:type=Cache,service=SequenceQueue,name=SEQ%GENERATOR,nodeId=1,tier=back

    It would be more flexible if I could dynamically grab all type=Cache MBeans for a particular node ID without specifying all the caches. I'm trying to query them like this:

        QueryExp specifiedNodeId = Query.eq(Query.attr("nodeId"), Query.value(nodeId));
        QueryExp typeIsCache = Query.eq(Query.attr("type"), Query.value("Cache"));
        QueryExp cacheNodes = Query.and(specifiedNodeId, typeIsCache);
        ObjectName coherence = new ObjectName("Coherence:*");
        Set<ObjectName> cacheMBeans = mBeanServer.queryMBeans(coherence, cacheNodes);

    However, regardless of whether I use queryMBeans() or queryNames(), the query returns a Set containing: 0 objects if I pass the arguments shown above; 0 objects if I pass null for the first argument; all MBeans in the Coherence:* domain (112) if I pass null for the second argument; and every single MBean (128) if I pass null for both arguments. The first two results are the unexpected ones, and suggest a problem in the QueryExp I'm passing, but I can't figure out what the problem is. I even tried just passing typeIsCache or specifiedNodeId for the second parameter (with either coherence or null as the first parameter) and I always get 0 results. I'm pretty green with JMX, so any insight on what the problem is would be appreciated. (FYI, the monitoring tool will be run on Java 5, so things like JMX 2.0 won't help me at this point.)


  • What's the fastest way to bulk insert a lot of data into SQL Server (C# client)?

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED ([ContainerId] ASC, [BinId] ASC, [Sequence] ASC))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I: drop the primary key while I am doing the inserting and recreate it later; or do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the table where insertions are happening small? Anything else? -- Based on the responses I have gotten, let me clarify a little bit. Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import? Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output. Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
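
    For illustration, a minimal sketch of the "drop the key, load, re-create" idea from the question (names taken from the DDL above; dropping a clustered key on a table that already holds data rebuilds it as a heap, so this is something to measure rather than a recommendation):

        -- before the bulk load
        ALTER TABLE [BulkData] DROP CONSTRAINT [PKBulkData];

        -- ... run the SqlBulkCopy batches here ...

        -- after the load, rebuild the clustered key once
        ALTER TABLE [BulkData]
            ADD CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED ([ContainerId] ASC, [BinId] ASC, [Sequence] ASC);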


  • Is there a way to avoid spaghetti code over the years?

    - by Yoni Roit
    I've had several programming jobs, each one with 20-50 developers and a project running for 3-5 years. Every time it's the same. Some programmers are bright, some are average. Everyone has their CS degree, everyone has read design patterns. Intentions are good, people are trying hard to write good code, but still, after a couple of years the code turns into spaghetti. Changes in module A suddenly break module B. There are always parts of the code that no one can understand except the person who wrote them. Changing the infrastructure is impossible, and backwards-compatibility issues prevent good features from getting in. Half of the time you just want to rewrite everything from scratch. And people more experienced than me treat this as normal. Is it? Does it have to be? What can I do to avoid this, or should I accept it as a fact of life? Edit: Guys, I am impressed with the amount and quality of responses here. This site and its community rock!


  • Need help tuning a SQL query

    - by Viper
    Hello, I need some help to speed up this SQL statement. The execution time is around 125 ms. During the runtime of my program this SQL (more precisely, equally structured SQLs for different tables) will be called 300,000 times. The average row count in the tables is around 10,000,000 rows, and new rows (updates/inserts) are added with a timestamp each day. The data relevant to this particular export program lies in the last 1-3 days; maybe that is helpful for deciding which index to create. The data I need is the current valid row for a given id and the preceding data row to get the updates (if it exists). We use an Oracle 11g database and .NET Framework 3.5. The SQL statement to tune:

        select ID_SOMETHING,     -- Number(12)
               ID_CONTRIBUTOR,   -- Char(4 Byte)
               DATE_VALID_FROM,  -- DATE
               DATE_VALID_TO     -- DATE
          from TBL_SOMETHING XID
         where ID_SOMETHING = :ID_INSTRUMENT
           and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
           and DATE_VALID_FROM <= :EXPORT_DATE
           and DATE_VALID_TO >= :EXPORT_DATE
         order by DATE_VALID_FROM asc;

    Here I uploaded the current explain plan for this query. I'm not a database expert, so I don't know which index type would fit this requirement best; I have seen that there are many different possible index types which could be applied. Maybe Oracle optimizer hints would help, too. Does anyone have a good idea for tuning this SQL, or can you point me in the right direction?
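
    For illustration, one composite index that covers the whole predicate as written; the column order (equality columns first, then the date range) and the index name are assumptions that would need checking against the real explain plan and data distribution:

        CREATE INDEX ix_tbl_something_lookup
            ON TBL_SOMETHING (ID_SOMETHING, ID_CONTRIBUTOR, DATE_VALID_FROM, DATE_VALID_TO);

    With the two equality columns leading, Oracle can seek straight to the matching id/contributor pair and then range-scan the dates.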


  • Teach Markup or use a WYSIWYG editor?

    - by Atomiton
    When it comes to WYSIWYG editors, WYSI rarely WYG. The problem I always have is when people paste in formatted text from Word. Ideally, what I'm looking for is a way for people to input text into the document while at the same time teaching them structure... I just don't know if that's a realistic goal (compared to cut n' paste). I'm curious whether people have found that using something other than WYSIWYG editors (take SO, for example) has worked for REAL WORLD USERS. I'm not talking about programmers, developers and experienced internet users... I'm talking about your average user. I'd be interested in best practices when it comes to getting users to enter content... and I'd love it if someone could point me to some good editors/examples. There are lots of choices when it comes to WYSIWYG (CKEditor, FreeTextBox, TinyMCE), but I don't hear a lot about SO-like techniques. Does adding that small barrier scare users away? Is it too difficult to teach people to mark up their text? Is it easier to teach them HTML? Is a BBCode implementation a good idea? What are some pros/cons of WYSIWYG vs. markup? What approaches have others used?


  • ASP.NET webservice API security.

    - by Tejaswi Yerukalapudi
    Hi, I have an iPhone app accessing an ASP.NET web service for data. Since I'm building both the ASP.NET end and the iPhone part of the app, and we'll shortly be publishing it in the App Store, I'd like to know what security checks I need to make. The basic flow of the program (without divulging too much about it) is as follows: login (enter username and password in the app); a primary screen where the data is loaded from the web service and presented; and posting data back after a few updates by the user. I'm using POST to send the data to the web service via HTTPS. I'm sanitizing the inputs and checking their length, but that's the limit of my knowledge as far as security goes. Any other tips are greatly appreciated! Edit: I should probably add that our service needs to be subscribed to separately, and the iPhone component cannot be used alone, so the average user will not have login credentials. And the app itself has healthcare data in it, so I'd rather not have anyone trying attacks from my login page. Thanks, Teja.


  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello experts. Application: I am working on a mid-to-large size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure, and patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order: speed of development with above-average performance; maintenance; future support and stability of the technology; performance. Limitations: 1) As we need to stick strictly with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0. 2) We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would not be any per-copy licensing issue. Questions: I have read many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use whichever code generation tool (like CodeSmith, or any other) you feel is best? I also need to estimate how long it would take to port the application from EF 4.0 to ADO.NET if in the future we get stuck with EF 4.0 over some feature or hit a serious performance issue, and conversely, if we choose ADO.NET, how long it would take to switch to EF 4.0. Lastly, as I was going through the articles, the code-only approach (with POCO classes) seems best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts and guide me on the above questions.


  • Help with MySQL Join Statement

    - by JasonS
    Hi, I just built a website and have realised that I need a "top 3 highest rated albums" list. I haven't built in anything that keeps track of the ratings; ratings are stored separately. Can someone show me how to put these together, please?

        SELECT id, name FROM albums LIMIT 3
        SELECT rating FROM ratings WHERE url=CONCAT('albums/show/', album.id)

    Let me just flesh it out a bit. I need to get back the following: from the albums table, id and name; from the ratings table, the average rating, i.e. ROUND((rating + rating + rating) / total ratings). About the ratings: users can rate everything on my website, so I have a generic ratings table. The rating is stored with the URL of the page it applies to. Hence, to get album ratings I need to match 'albums/show/{album_id}'. In hindsight I should have had a type and id field, but it is a bit late now with a launch imminent. Any help is much appreciated.
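
    For illustration, a single query along the lines asked about, joining through the URL pattern (table and column names are taken from the question; the LEFT JOIN keeps albums that have no ratings yet, and the whole thing is an untested sketch):

        SELECT a.id,
               a.name,
               ROUND(AVG(r.rating), 1) AS avg_rating
          FROM albums a
          LEFT JOIN ratings r
            ON r.url = CONCAT('albums/show/', a.id)
         GROUP BY a.id, a.name
         ORDER BY AVG(r.rating) DESC
         LIMIT 3;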


  • Algorithm To Select Most Popular Places from Database

    - by Russell C.
    We have a website that contains a database of places. For each place, our users are able to take any of the following actions, which we record: VIEW (view its profile), RATING (rate it on a scale of 1-5 stars), REVIEW (review it), COMPLETED (mark that they've been there), WISH LIST (mark that they want to go there), and FAVORITE (mark that it's one of their favorites). In our database table of places, each place carries a count of the number of times each action above was taken, as well as the average rating given by users: views, ratings, avg_rating, completed, wishlist, favorite. What we want to be able to do is generate lists of the top places using the above information. Ideally, we would want to generate this list with a relatively simple SQL query, without needing to do any legwork to calculate additional fields or stack-rank places against one another. That said, since we only have about 50,000 places, we could run a nightly cron job to calculate some fields, such as rankings in different categories, if it would make a meaningful difference in the overall results of our top places. I'd appreciate it if you could make some suggestions on how we should think about bubbling the best places to the top, which criteria we should weight more heavily, and, given that information, what the MySQL query would need to look like in order to select the top 10 places. One thing to note is that at this time we are less concerned with the recency of a place being popular, meaning that looking at the aggregate information is fine and more recent data doesn't need to be weighted more heavily. Thanks in advance for your help and advice!
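
    For illustration only, one common shape for such a query: a Bayesian-style adjusted rating (so a place with two 5-star ratings doesn't outrank one with two hundred 4.5-star ratings) plus small bonuses for the engagement counters. The table name places, the id/name columns, the prior (10 ratings, 3.5 stars) and every weight below are made-up starting points to tune, not a recommendation:

        SELECT id, name,
               (ratings / (ratings + 10)) * avg_rating
             + (10 / (ratings + 10)) * 3.5          -- pull sparse ratings toward a site-wide prior
             + 0.02  * favorite
             + 0.01  * (completed + wishlist)
             + 0.001 * views                        -- light tie-breaker
               AS score
          FROM places
         ORDER BY score DESC
         LIMIT 10;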


  • Django aggregate query generating SQL error

    - by meepmeep
    I'm using Django 1.1.1 on a SQL Server 2005 db using the latest sqlserver_ado library. models.py includes:

        class Project(models.Model):
            name = models.CharField(max_length=50)

        class Thing(models.Model):
            project = models.ForeignKey(Project)
            reference = models.CharField(max_length=50)

        class ThingMonth(models.Model):
            thing = models.ForeignKey(Thing)
            timestamp = models.DateTimeField()
            ThingMonthValue = models.FloatField()
            class Meta:
                db_table = u'ThingMonthSummary'

    In a view, I have retrieved a queryset called 'things' which contains 25 Things:

        things = Thing.objects.select_related().filter(project=1).order_by('reference')

    I then want to do an aggregate query to get the average ThingMonthValue for the first 20 of those Things for a certain period, and the same value for the last 5. For the first 20 I do:

        averageThingMonthValue = ThingMonth.objects.filter(thing__in=things[:20], timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00")).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']

    This works fine and returns the desired value. For the last 5 I do:

        averageThingMonthValue = ThingMonth.objects.filter(thing__in=things[20:], timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00")).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']

    But for this I get an SQL error: 'Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.' The SQL query being used by Django reads:

        SELECT AVG([ThingMonthSummary].[ThingMonthValue]) AS [ThingMonthValue__avg]
        FROM [ThingMonthSummary]
        WHERE ([ThingMonthSummary].[thing_id] IN (
                  SELECT _row_num, [id] FROM (
                      SELECT ROW_NUMBER() OVER (ORDER BY [AAAA].[id] ASC) as _row_num, [AAAA].[id]
                      FROM (SELECT U0.[id] FROM [Thing] U0 WHERE U0.[project_id] = 1) AS [AAAA]
                  ) as QQQ where 20 < _row_num)
               AND [ThingMonthSummary].[timestamp] BETWEEN '01/01/09 00:00:00' and '03/01/10 00:00:00')

    Any idea why it works for one slice of the Things and not the second? I've checked, and the two slices do contain the desired Things correctly.


  • Search implementation dilemma: full text vs. plain SQL

    - by Ethan
    I have a MySQL/Rails app that needs search. Here's some info about the data: users search within their own data only, so searches are narrowed down by user_id to begin with; each user will have up to about five thousand records (they accumulate over time); I wrote a typical user's records out to a text file, and the file size is 2.9 MB; search has to cover two columns, title and body, where title is a varchar(255) column and body is of column type text. This will be lightly used: if I averaged a few searches per second, that would be surprising. It's running on a 500 MB CentOS 5 VPS machine. I don't want relevance ranking or any kind of fuzziness; searches should be for exact strings and reliably return all records containing the string, in simple date order, newest to oldest. I'm using the InnoDB table type. I'm looking at plain SQL search (through the searchlogic gem) or full-text search using Sphinx and the Thinking Sphinx gem. Sphinx is very fast and Thinking Sphinx is cool, but it adds complexity: a daemon to maintain, cron jobs to maintain the index. Can I get away with plain SQL search for a small-scale app?
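
    For illustration, the plain-SQL option being weighed here is roughly this shape (the table name notes and the literal values are made up; the point is that the user_id filter does most of the work, since a leading-wildcard LIKE cannot use an ordinary index and so only scans that user's few thousand rows):

        SELECT id, title, created_at
          FROM notes
         WHERE user_id = 42
           AND (title LIKE '%search term%' OR body LIKE '%search term%')
         ORDER BY created_at DESC;

    A composite index on (user_id, created_at) would keep both the per-user scan and the date ordering cheap.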


  • Discover periodic patterns in a large data-set

    - by Miner
    I have a large sequence of tuples on disk in the form (t1, k1) (t2, k2) ... (tn, kn), where ti is a monotonically increasing timestamp and ki is a key (assume a fixed-length string if needed). Neither ti nor ki is guaranteed to be unique. However, the number of unique tis and kis is huge (millions). n itself is very large (100 million+) and the size of k (approx 500 bytes) makes it impossible to store everything in memory. I would like to find periodic occurrences of keys in this sequence. For example, if I have the sequence (1, a) (2, b) (3, c) (4, b) (5, a) (6, b) (7, d) (8, b) (9, a) (10, b), the algorithm should emit (a, 4) and (b, 2); that is, a occurs with a period of 4 and b occurs with a period of 2. If I build a hash of all keys and store the average of the differences between consecutive timestamps of each key, plus a standard deviation of the same, I might be able to make a single pass and report only the keys with an acceptable standard deviation (ideally 0). However, that requires one bucket per unique key, whereas in practice I might have very few really periodic patterns. Any better ways?
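
    Purely as an illustration of the average-gap/standard-deviation idea described above (this assumes the tuples were loaded into some database with window functions, which is not part of the question's setup; the table name events and the columns t, k are made up):

        -- gap between consecutive occurrences of each key, then its mean and spread
        SELECT k,
               AVG(gap)        AS period_estimate,
               STDDEV_POP(gap) AS jitter
          FROM (SELECT k,
                       t - LAG(t) OVER (PARTITION BY k ORDER BY t) AS gap
                  FROM events) g
         WHERE gap IS NOT NULL
         GROUP BY k
        HAVING STDDEV_POP(gap) = 0;   -- keep only the strictly periodic keys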


  • Using outer query result in a subquery in postgresql

    - by brad
    I have two tables, points and contacts, and I'm trying to get the average points.score per contact, grouped on a monthly basis. Note that points and contacts aren't related; I just want the sum of points created in a month divided by the number of contacts that existed in that month. So, I need to sum points grouped by the created_at month, and I need to take the count of contacts FOR THAT MONTH ONLY. It's that last part that's tripping me up. I'm not sure how I can use a column from an outer query in the subquery. I tried something like this:

        SELECT SUM(score) AS points_sum,
               EXTRACT(month FROM created_at) AS month,
               date_trunc('MONTH', created_at) + INTERVAL '1 month' AS next_month,
               (SELECT COUNT(id) FROM contacts WHERE contacts.created_at <= next_month) AS contact_count
        FROM points
        GROUP BY month, next_month
        ORDER BY month

    So, I'm extracting the month that my points are being summed over, and at the same time getting the beginning of the next_month, so that I can say "get me the count of contacts whose created_at is < next_month". But it complains that column next_month doesn't exist. This is understandable, as the subquery knows nothing about the outer query. Qualifying it as points.next_month doesn't work either. So can someone point me in the right direction for achieving this? Tables:

        Points
        score | created_at
        10    | "2011-11-15 21:44:00.363423"
        11    | "2011-10-15 21:44:00.69667"
        12    | "2011-09-15 21:44:00.773289"
        13    | "2011-08-15 21:44:00.848838"
        14    | "2011-07-15 21:44:00.924152"

        Contacts
        id | created_at
        6  | "2011-07-15 21:43:17.534777"
        5  | "2011-08-15 21:43:17.520828"
        4  | "2011-09-15 21:43:17.506452"
        3  | "2011-10-15 21:43:17.491848"
        1  | "2011-11-15 21:42:54.759225"

        sum, month and next_month (without the subselect)
        sum | month | next_month
        14  | 7     | "2011-08-01 00:00:00"
        13  | 8     | "2011-09-01 00:00:00"
        12  | 9     | "2011-10-01 00:00:00"
        11  | 10    | "2011-11-01 00:00:00"
        10  | 11    | "2011-12-01 00:00:00"
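
    For illustration, one way around the alias-visibility problem is to compute next_month in a derived table first, so the correlated subquery sees it as a real column. This is a sketch against the tables shown above, not a tested answer:

        SELECT p.month,
               p.points_sum,
               (SELECT COUNT(id)
                  FROM contacts c
                 WHERE c.created_at <= p.next_month) AS contact_count
          FROM (SELECT SUM(score) AS points_sum,
                       EXTRACT(month FROM created_at) AS month,
                       date_trunc('MONTH', created_at) + INTERVAL '1 month' AS next_month
                  FROM points
                 GROUP BY 2, 3) p
         ORDER BY p.month;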


  • Maintaining a pool of DAO class instances vs. using the new operator

    - by Fazal
    We have been trying to benchmark our application's performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after version 1.4 of Java). But we did a test anyway, comparing the newInstance method against maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance code was almost 10 times slower than the object pool code. These objects represent tables with about 50 fields, all of string type. Can someone share their thoughts on this issue, as now I am more confused about whether object pooling of at least some DAO instances is a better option? The pool size, as I see it right now, should be large enough to meet the size of average requests. There is a flip side: my memory footprint will go up. But I am beginning to wonder if this kind of idea makes sense, at least for some of the DAO entities representing tables of about 50 or more columns. Please share your ideas, and let me know if this has been tried by someone or whether I am missing some point here.


  • R: How to remove outliers from a smoother in ggplot2?

    - by John
    I have the following data set that I am trying to plot with ggplot2. It is a time series of three experiments A1, B1 and C1, and each experiment had three replicates. I am trying to add a stat which detects and removes outliers before returning a smoother (mean and variance?). I have written my own outlier function (not shown), but I expect there is already a function to do this; I just have not found it. I've looked at stat_sum_df("median_hilow", geom = "smooth") from some examples in the ggplot2 book, but I didn't understand the Hmisc help doc well enough to tell whether it removes outliers or not. Is there a function to remove outliers like this in ggplot, or where would I amend my code below to add my own function?

        library(ggplot2)
        data = data.frame(
          day = c(1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7),
          od = c(0.1,1.0,0.5,0.7, 0.13,0.33,0.54,0.76, 0.1,0.35,0.54,0.73,
                 1.3,1.5,1.75,1.7, 1.3,1.3,1.0,1.6, 1.7,1.6,1.75,1.7,
                 2.1,2.3,2.5,2.7, 2.5,2.6,2.6,2.8, 2.3,2.5,2.8,3.8),
          series_id = c("A1","A1","A1","A1", "A1","A1","A1","A1", "A1","A1","A1","A1",
                        "B1","B1","B1","B1", "B1","B1","B1","B1", "B1","B1","B1","B1",
                        "C1","C1","C1","C1", "C1","C1","C1","C1", "C1","C1","C1","C1"),
          replicate = c("A1.1","A1.1","A1.1","A1.1", "A1.2","A1.2","A1.2","A1.2", "A1.3","A1.3","A1.3","A1.3",
                        "B1.1","B1.1","B1.1","B1.1", "B1.2","B1.2","B1.2","B1.2", "B1.3","B1.3","B1.3","B1.3",
                        "C1.1","C1.1","C1.1","C1.1", "C1.2","C1.2","C1.2","C1.2", "C1.3","C1.3","C1.3","C1.3"))

        > data
           day   od series_id replicate
        1    1 0.10        A1      A1.1
        2    3 1.00        A1      A1.1
        3    5 0.50        A1      A1.1
        4    7 0.70        A1      A1.1
        5    1 0.13        A1      A1.2
        6    3 0.33        A1      A1.2
        7    5 0.54        A1      A1.2
        8    7 0.76        A1      A1.2
        9    1 0.10        A1      A1.3
        10   3 0.35        A1      A1.3
        11   5 0.54        A1      A1.3
        12   7 0.73        A1      A1.3
        13   1 1.30        B1      B1.1

    This is what I have so far, and it is working nicely, but outliers are not removed:

        r <- ggplot(data = data, aes(x = day, y = od))
        r + geom_point(aes(group = replicate, color = series_id)) +  # add points
            geom_line(aes(group = replicate, color = series_id)) +   # add lines
            geom_smooth(aes(group = series_id))                      # add smoother, average of each replicate


  • How to find the reasons why a site goes online/offline

    - by HollerTrain
    It seems a website I manage has been going online and offline throughout the entire day today. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. Here is what I DO know: I use a program that pings the server every minute, and when the server is not responding it emails me, so I know exactly when the site is online and offline. The site went up and down between 8pm and 12pm on 12.28, and around the 1am hour early in the morning of 12.29 (New York City timezone; all times below are in the same timezone). At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. When the site is going offline/online, I ran the top command and get this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down, this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.


  • Speeding up a SOAP-powered website

    - by ChrisRamakers
    Hi all, we're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP web service. But our servers are located in Belgium and the web service we connect to is located in San Francisco, so it's a long-distance connection, to say the least. Our website is PHP-powered, using PHP's built-in SoapClient class. On average a call to the web service takes 0.7 seconds, and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed. This is the code which instantiates the SoapClient; what I'm looking for now is other ways/methods to improve the speed of single requests. Does anyone have ideas or suggestions?

        private function _createClient()
        {
            try {
                $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
                $client = new SoapClient($wsdl, array(
                    'soap_version'       => SOAP_1_1,
                    'encoding'           => 'utf-8',
                    'connection_timeout' => 5,
                    'cache_wsdl'         => 1,
                    'trace'              => 1,
                    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
                ));
                $header_tags = array(
                    'username' => new SOAPVar($this->config->wsUsername, XSD_STRING, null, null, null, $this->ns),
                    'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING, null, null, null, $this->ns));
                $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
                $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
                $client->__setSoapHeaders($header);
            } catch (SoapFault $e) {
                controller('Error')->error($id.': Webservice connection error '.$e->getCode());
                exit;
            }
            $this->client = $client;
            return $this->client;
        }


  • Merging datasets with 2 different time variables in SAS

    - by John
    Hey guys, for those regularly browsing this site, sorry for yet another question (however, I did solve my last one myself!). I have another problem with merging datasets; it seems that accounting for time in datasets is a real pain in the ass. I successfully managed to merge on months in my previous datasets, however it seems I have a final dataset which only has quarter as a time count variable. So where all my normal databases have month 1-xxx as an indicator of time, this database has quarter as an indicator of time. I still want to add the variables of this last database, let's call it TVOL, into my WORK database. Quick summary: QUARTER: quarter 0 = JAN1996-MAR1996. Month: month 0 = JAN1996.

        Example: TVOL
        TVOL  | Ticker | Quarter
        1500  | AA     | -1
        52546 | BB     | 15

        Example: WORK
        BETA | Ticker | Month
        1.52 | AA     | 2
        1.54 | BB     | 3

        Example: Merged
        BETA | TVOL | Ticker | Month
        1.52 | 500  | AA     | 2

    I now want to merge these 2 tables using the following relationship: if the month is in quarter 1, the data of quarter 0 has to be used; so if I have an observation in WORK with date 2FEB1996, the TVOL of quarter -1 has to be put behind this observation. Something like: IF month is in quarter i, use data from quarter i-1. Also, as TVOL is measured quarterly and I have to put it in monthly, I have to take the average, so (TVOL/3) should be added as a variable. Thanks!
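
    For illustration, the month-to-quarter rule in SQL terms (the question is in SAS, where PROC SQL accepts a very similar join; dataset and variable names are taken from the examples above, and the arithmetic is an interpretation of them):

        -- month m falls in quarter FLOOR(m / 3); the rule says to use the previous quarter,
        -- and the quarterly TVOL is spread evenly over its 3 months
        SELECT w.*,
               t.tvol / 3 AS tvol_monthly
          FROM work w
          LEFT JOIN tvol t
            ON t.ticker  = w.ticker
           AND t.quarter = FLOOR(w.month / 3) - 1;

    For month 2 (MAR1996) this picks up quarter -1, giving 1500 / 3 = 500, which matches the merged example.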


  • use exec for dsadd

    - by Daryl Gill
    I'm programming on a Windows Server 2008 and I wish to have a web UI to interact with the domain's Active Directory. One of my main problems is that I'm calling dsadd from an HTML form, but it is not succeeding. I know my command is correct; I have tested it on the server's command line. My code is as below:

        if (isset($_POST['Submit'])) {
            $DesiredUsername = $_POST['DesiredUsername'];
            $DesiredPassword = $_POST['DesiredPassword'];
            $DU = "{$DesiredUsername}";       // Desired Username
            $OU = "PHPCreatedUsers";          // Domain OU
            $DC1 = "slayerserv";              // Domain Part one
            $DC2 = "local";                   // Domain Part Two
            $PWD = "{$DesiredPassword}";      // Password

            $ExecScript = 'dsadd user cn=$DesiredUsername,cn=PHPCreatedUsers,dc=slayerserv,dc=local -disabled no -pwd $DesiredPassword -mustchpwd yes';
            exec($ExecScript, $output);

            mysql_query("INSERT INTO addedusers (`ID`, `DU`, `OU`, `DC1`, `DC2`, `PWD`) VALUES ('', '$DU', '$OU', '$DC1', '$DC2', '$PWD')");
            echo "<br><br>";
            print_r($output);
            # echo "User: $DesiredUsername Has been Created";
        }

    When I print_r($output); it returns a blank array: Array ( ). Could anyone provide me with a solution or point me in the right direction? Below is a working example of my usage of exec:

        $Script = 'ping 127.0.0.1 -n 1';
        exec($Script, $Output);
        print_r($Output);

    print_r($Output); gives:

        Array
        (
            [0] =>
            [1] => Pinging 127.0.0.1 with 32 bytes of data:
            [2] => Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
            [3] =>
            [4] => Ping statistics for 127.0.0.1:
            [5] =>     Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
            [6] => Approximate round trip times in milli-seconds:
            [7] =>     Minimum = 0ms, Maximum = 0ms, Average = 0ms
        )


  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL Server and decided it was not a good solution; SQL is not good at randomizing on a per-row basis. Generating the random data (975k rows of it) takes a minimal amount of time. It's in a List of custom objects. I need to take this random data and update many rows in the database with it. I tried updating the rows one at a time, which was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data, i.e.:

        UPDATE t SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID

    The database is very un-normalized, so this Acct data is sprinkled all over the place, and I've got no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:

        USE TheDatabase
        INSERT tmp_RandomizedData
        SELECT 1, '4392 EIGHTH AVE', '', 'JENNIFER CARTER', 'BARBARA CARTER'
        UNION ALL
        SELECT 2, '2168 MAIN ST', 'HNGR F', 'DANIEL HERNANDEZ', 'SUSAN MARTIN'

    (and so on, another 98 times; FYI, this is not real data!) I'm building this INSERT script in batches of 100. It's taking 175 ms on average to run each insert. Does that seem like a long time? It's going to take about 35 minutes to run the whole insert. The table doesn't have a primary key or any indexes; I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?
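
    For illustration, one server-side tweak worth measuring: wrapping a run of batches in a single explicit transaction, so the log is flushed once per COMMIT rather than once per autocommitted statement. This reuses the sample rows from the question; whether it helps at all depends on the log disk, so treat it as an experiment:

        BEGIN TRAN;

        INSERT tmp_RandomizedData
        SELECT 1, '4392 EIGHTH AVE', '', 'JENNIFER CARTER', 'BARBARA CARTER'
        UNION ALL
        SELECT 2, '2168 MAIN ST', 'HNGR F', 'DANIEL HERNANDEZ', 'SUSAN MARTIN';
        -- ...repeat for the remaining batches of 100...

        COMMIT;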


  • What RDF database do I use for a product attribute situation? (Initially I thought about using EAV)

    - by keisimone
    Hi, I have a similar issue to the one described in http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters. I am convinced to use RDF now, but only because of one of the comments made by Bill Karwin in the answer to that question, and I already have a database in MySQL and the code is in PHP. 1) So what RDF database should I use? 2) Should I combine the approaches, meaning I keep class table inheritance in the MySQL database and put just the weird product attributes in RDF? I don't think I should move everything to an RDF database, since it is only the products, with their wide array of possible attributes and values, that are giving me the problem. 3) What PHP resources and articles should I look at that will help me build this? 4) More articles or resources that help me better understand RDF in the context of the above challenge (building something that will better hold all sorts of products' attributes and values) would be greatly appreciated; I tend to work better when I have a conceptual understanding of what is going on. Do bear in mind I am a complete novice at this, and my knowledge of programming and databases is average at best.


  • Advice on a DB that can be uploaded to a website by a smart client for collecting survey feedback

    - by absfabs
    Hello, I'm hoping you can help. I'm looking for a zero-config multi-user database that my WinForms application can easily upload to a webserver folder (together with 1 or 2 classic ASP pages), and am looking for some suggestions/recommendations. The idea is that the database will be used to collect feedback entered by people filling in the ASP pages. The pages will write to the database using JavaScript. The database will subsequently be downloaded again for processing once the responses are in. In summary: it will mostly run in MS Windows environments; I have a modest budget for this and do not mind paying for such a database; no runtime licensing costs; it should be xcopy-deployable (once uploaded to a website folder it should be operational); it should not have a .NET CLR dependency; it should support a reasonable level of concurrent access (the average respondent count would be around 20-30, but one never knows); and it should be a reasonable size, so that uploads/downloads to and from the site will be reasonably fast. I would appreciate your suggestions/comments. Many thanks, Abz. To clarify: this is a desktop commercial application for feedback management in a vertical market. It uses SQL Server as the backing store. The application currently provides feedback management from email and paper feedback, and I now want to add a web feedback capability. Getting users to make their SQL Servers accessible to a website is not an option at this time, as I want to make getting up and running as painless as possible. I intend to release a web-based implementation of the software in the near future, but for now am looking at the above as a pragmatic way to provide web-based feedback collection.


  • Important question about LINQ to SQL performance in heavily loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work more slowly, but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about that. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data at the beginning, works with it, and updates it. Primarily the updates are ++ and -- (increasing and decreasing values). I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @Id

    It worked with no problems, obviously. But with LINQ to SQL the data is fetched at the beginning, moved into the class, changed and then saved:

        Stats.registeredusers++;
        Db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by someone else (which happens on my site all the time), then LINQ goes "oops, this value is already 100,001; whatever, I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time; the stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?


  • How to choose a lightweight database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a local backend database system on each POS device. The scenario is as follows: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes back up. Now I am in a dilemma over which local database is going to be most suitable for me. Here are some notes to help point me in the right direction: it has to be light ("my POS devices are usually old and struggle with performance"); it has to be free ("I have a lot of devices and I do not want additional cost besides the main SQL Server"); and one day I'd love to try porting all of that to Mono and Linux. Here is what I've researched so far: simple XML ("light, but I am afraid of performance; my main items table averages 10K records"); SQL Server Express ("I am afraid my POS devices' hardware is too poor for SQL Express, and it is also hard to install and configure on each device"); the less well-known Advantage Database Server, which has a free distribution of its offline ADT system; DBF with an extended library ("respect for good old DBFs, but that era is behind me, with Clipper and DBFs"); MS Access; and SQLite ("the one I mostly like for now, but I am afraid of how it is going to pair with MS SQL; do they have the same data types?"). I know there is a lot of subjectivity in this for SO, but can someone at least recommend some other lite database systems, or things that I should pay most attention to before I choose a database?


  • Splitting the build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines? Use case: we are an average software development company. We own around 50 development workstations (quad core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any given moment. Obviously all of them are continuously built on the server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: whenever we build 5 projects in a row, the last project is ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only a part of the game; then you need to deploy, run tests, etc.). Yes, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!" Yeah, right (damn them)! Anyway, what about splitting the build among developers' workstations? Let's say whenever we need to build project "A", we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine that is still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software? (90% of the projects are .NET C#.)

