Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.


  • Where's my memory?! Nginx + PHP-FPM front end webserver slows to a crawl...

    - by incredimike
    I'm not sure if I have a problem with a memory leak (as my hosting company suggests), or if we both need to read http://linuxatemyram.com. Maybe you clever people can help us out? This is a front-end webserver VM running essentially only nginx & php-fpm on RHEL 5.5. This server is powering Magento, a PHP eCommerce thinggy. The server is running in a shared environment, but we're changing that soon. Anyway.. after a reboot the server runs just fine, but within a day it will grind itself into nothingness. Pages will take literally 2 minutes to load, CPU spikes like crazy, etc.. The console is even sluggish when I SSH in. It's like my whole server is being brought to its knees. I've also been monitoring the DB server via top and tcpdumping incoming traffic. The DB stays idle for a good portion of that "slow" load time. When i start seeing queries coming from the front-end server, the page loads soon afterward. Here are some stats after me logging in during a slow-down, after restarting php-fpm: [mike@front01 ~]$ free -m total used free shared buffers cached Mem: 5963 5217 745 0 192 314 -/+ buffers/cache: 4711 1252 Swap: 4047 4 4042 [mike@front01 ~]$ top top - 11:38:55 up 2 days, 1:01, 3 users, load average: 0.06, 0.17, 0.21 Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie Cpu0 : 0.0%us, 0.3%sy, 0.0%ni, 99.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 6106800k total, 5361288k used, 745512k free, 199960k buffers Swap: 4144728k total, 4976k used, 4139752k free, 328480k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 31806 apache 15 0 601m 120m 37m S 0.0 2.0 0:22.23 php-fpm 31805 apache 15 0 549m 66m 31m S 0.0 1.1 0:14.54 php-fpm 31809 apache 16 0 547m 65m 32m S 0.0 1.1 0:12.84 php-fpm 32285 apache 15 0 546m 63m 33m S 0.0 1.1 0:09.22 php-fpm 32373 apache 15 0 546m 62m 32m S 0.0 1.1 0:09.66 php-fpm 31808 apache 16 0 543m 60m 35m S 0.0 1.0 0:18.93 php-fpm 31807 apache 16 0 533m 49m 30m S 0.0 0.8 0:08.93 php-fpm 32092 apache 15 0 535m 48m 27m S 0.0 0.8 0:06.67 php-fpm 4392 root 18 0 194m 10m 7184 S 0.0 0.2 0:06.96 cvd 4064 root 15 0 154m 8304 4220 S 0.0 0.1 3:55.57 snmpd 4394 root 15 0 119m 5660 2944 S 0.0 0.1 0:02.84 EvMgrC 31804 root 15 0 519m 5180 932 S 0.0 0.1 0:00.46 php-fpm 4138 ntp 15 0 23396 5032 3904 S 0.0 0.1 0:02.38 ntpd 643 nginx 15 0 95276 4408 1524 S 0.0 0.1 0:01.15 nginx 5131 root 16 0 90128 3340 2600 S 0.0 0.1 0:01.41 sshd 28467 root 15 0 90128 3340 2600 S 0.0 0.1 0:00.35 sshd 32602 root 16 0 90128 3332 2600 S 0.0 0.1 0:00.36 sshd 1614 root 16 0 90128 3308 2588 S 0.0 0.1 0:00.02 sshd 2817 root 5 -10 7216 3140 1724 S 0.0 0.1 0:03.80 iscsid 4161 root 15 0 66948 2340 800 S 0.0 0.0 0:10.35 sendmail 1617 nicole 17 0 53876 2000 1516 S 0.0 0.0 0:00.02 sftp-server ... Is there anything else I should be looking at, or any more information that might be useful? I'm just a developer, but the slowdowns on this system worry me and make it hard to do my work.. Help me out, ServerFault!
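    A minimal triage sketch for separating "Linux ate my RAM" from a real leak: track what the php-fpm workers actually hold over time, and read free -m net of buffers/cache (the "-/+ buffers/cache" row above shows 4711 MB used and 1252 MB free, not the headline 745 MB). These are standard Linux/procps commands, but treat this as a suggestion rather than a diagnosis:

        # Sum resident memory across all php-fpm workers; run periodically
        # and watch whether the total climbs between restarts.
        ps -o rss= -C php-fpm | awk '{ sum += $1 } END { printf "php-fpm RSS: %.0f MB\n", sum / 1024 }'

        # The "-/+ buffers/cache" line is the one that reflects real pressure.
        free -m

    If the summed RSS does climb steadily, capping worker lifetime (e.g. pm.max_requests in the pool config) is a common mitigation for leaky PHP applications.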

    Read the article

  • amplified reflected attack on dns

    - by Mike Janson
    The term is new to me. So I have a few questions about it. I've heard it mostly happens with DNS servers? How do you protect against it? How do you know if your servers can be used as a victim? This is a configuration issue right? my named conf file include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; }; }; options { /* make named use port 53 for the source of all queries, to allow * firewalls to block all ports except 53: */ // query-source port 53; /* We no longer enable this by default as the dns posion exploit has forced many providers to open up their firewalls a bit */ // Put files that named is allowed to write in the data/ directory: directory "/var/named"; // the default pid-file "/var/run/named/named.pid"; dump-file "data/cache_dump.db"; statistics-file "data/named_stats.txt"; /* memstatistics-file "data/named_mem_stats.txt"; */ allow-transfer {"none";}; }; logging { /* If you want to enable debugging, eg. using the 'rndc trace' command, * named will try to write the 'named.run' file in the $directory (/var/named"). * By default, SELinux policy does not allow named to modify the /var/named" directory, * so put the default debug log file in data/ : */ channel default_debug { file "data/named.run"; severity dynamic; }; }; view "localhost_resolver" { /* This view sets up named to be a localhost resolver ( caching only nameserver ). * If all you want is a caching-only nameserver, then you need only define this view: */ match-clients { 127.0.0.0/24; }; match-destinations { localhost; }; recursion yes; zone "." IN { type hint; file "/var/named/named.ca"; }; /* these are zones that contain definitions for all the localhost * names and addresses, as recommended in RFC1912 - these names should * ONLY be served to localhost clients: */ include "/var/named/named.rfc1912.zones"; }; view "internal" { /* This view will contain zones you want to serve only to "internal" clients that connect via your directly attached LAN interfaces - "localnets" . */ match-clients { localnets; }; match-destinations { localnets; }; recursion yes; zone "." IN { type hint; file "/var/named/named.ca"; }; // include "/var/named/named.rfc1912.zones"; // you should not serve your rfc1912 names to non-localhost clients. // These are your "authoritative" internal zones, and would probably // also be included in the "localhost_resolver" view above :
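    The views above already scope recursion to localhost and localnets, which is the core defense: reflection/amplification abuses servers that answer recursive queries for the whole internet. For a configuration without views, a hedged sketch of the equivalent options-level restriction (the "trusted" networks are placeholders):

        acl "trusted" { 127.0.0.0/8; 192.168.0.0/16; };

        options {
            // ...existing options...
            allow-recursion   { trusted; };  // refuse recursion from the internet at large
            allow-query-cache { trusted; };
        };

    Newer BIND releases (9.9.4 and later) also ship Response Rate Limiting (rate-limit { responses-per-second 10; }; in options), which caps how effectively an authoritative server can be used as an amplifier. To check whether a server is an open resolver, run dig against it from an outside host and see if it answers a recursive query for a zone it is not authoritative for.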

    Read the article

  • dns server bind is not working [closed]

    - by user1742080
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { any; };
            recursion yes;
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        zone "." IN {
            type hint;
            file "named.ca";
        };

        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";

        zone "mydomain.com" {
            type master;
            file "/var/named/data/named.mydomain.com";
            allow-update { none; };
        };

    And the content of "/var/named/data/named.mydomain.com":

        $TTL 38400

        mydomain.com. IN SOA ns1.mydomain.com. milad.yahoo.com. (
                      2012101201 ; serial number YYYYMMDDNN
                      28800      ; Refresh
                      7200       ; Retry
                      864000     ; Expire
                      38400      ; Min TTL
        )

        mydomain.com.     IN A  1.2.3.4
        www               IN A  1.2.3.4
        ns1.mydomain.com. IN A  1.2.3.4
        ns2.mydomain.com. IN A  1.2.3.4
        mydomain.com.     IN NS ns1.mydomain.com.
        mydomain.com.     IN NS ns2.mydomain.com.

    And I'm sure the named service is running:

        [root@server ~]# service named status
        version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
        CPUs found: 8
        worker threads: 8
        number of zones: 20
        debug level: 0
        xfers running: 0
        xfers deferred: 0
        soa queries in progress: 0
        query logging is OFF
        recursive clients: 0/0/1000
        tcp clients: 0/100
        server is up and running
        named (pid 26299) is running...
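    A quick way to check what the server itself answers, independent of ping and registrar delegation (a suggested check, with 1.2.3.4 standing in for the real address as in the zone file above):

        # Ask the local named directly:
        dig @127.0.0.1 mydomain.com A +short
        dig @127.0.0.1 www.mydomain.com A +short

        # Then ask it over its public address from another host:
        dig @1.2.3.4 mydomain.com A

    If these return 1.2.3.4 but normal resolution still fails, the problem is more likely the delegation (NS and glue records at the registrar) than named.conf.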

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube.

    I manage the IT Operations team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant production support overhead to basically "schedule" the ETL jobs such that they don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time-slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3, 4, 5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling and sequential processing.

    One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran in groups of three machines, with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETLs we were able to process on one machine independently.

    Another option we've considered is completely re-architecting our SSIS packages to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/memory/disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it).

    So in summary: are we missing an obvious solution for this, and does anyone know of any tools (free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.)

    Ultimately VMware's Distributed Resource Scheduler addresses this, as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMware to move the VMs around to balance out hardware usage. I'm definitely not against using VMware to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMware is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff

    Read the article

  • GROUP_CONCAT in CodeIgniter

    - by mickaelb91
    I'm just blocking to how create my group_concat with my sql request in CodeIgniter. All my queries are listed in a table, using Jtable library. All work fine, except when I try to insert GROUP_CONCAT. Here's my model page : function list_all() { $login_id = $this->session->userdata('User_id'); $this->db->select('p.project_id, p.Project, p.Description, p.Status, p.Thumbnail, t.Template'); $this->db->from('assigned_projects_ppeople a'); $this->db->where('people_id', $login_id); $this->db->join('projects p', 'p.project_id = a.project_id'); $this->db->join('project_templates t', 't.template_id = p.template_id'); $this->db->select('GROUP_CONCAT(u.Asset SEPARATOR ",") as assetslist', FALSE); $this->db->from('assigned_assets_pproject b'); $this->db->join('assets u', 'u.asset_id = b.asset_id'); $query = $this->db->get(); $rows = $query->result_array(); //Return result to jTable $jTableResult = array(); $jTableResult['Result'] = "OK"; $jTableResult['Records'] = $rows; return $jTableResult; } My controller page : function listRecord(){ $this->load->model('project_model'); $result = $this->project_model->list_all(); print json_encode($result); } And to finish my view page : <table id="listtable"></table> <script type="text/javascript"> $(document).ready(function () { $('#listtable').jtable({ title: 'Table test', actions: { listAction: '<?php echo base_url().'project/listRecord';?>', createAction: '/GettingStarted/CreatePerson', updateAction: '/GettingStarted/UpdatePerson', deleteAction: '/GettingStarted/DeletePerson' }, fields: { project_id: { key: true, list: false }, Project: { title: 'Project Name' }, Description: { title: 'Description' }, Status: { title: 'Status', width: '20px' }, Thumbnail: { title: 'Thumbnail', display: function (data) { return '<a href="<?php echo base_url('project');?>/' + data.record.project_id + '"><img class="thumbnail" width="50px" height="50px" src="' + data.record.Thumbnail + '" alt="' + data.record.Thumbnail + '" ></a>'; } }, Template: { title: 'Template' }, Asset: { title: 'Assets' }, RecordDate: { title: 'Record date', type: 'date', create: false, edit: false } } }); //Load person list from server $('#listtable').jtable('load'); }); </script> I read lot of posts talking about that, like replace ',' separator by ",", or use OUTER to the join, or group_by('p.project_id') before using get method, don't work. Here is a the output of the query in json : {"Result":"OK","Records":[{"project_id":"1","Project":"Adam & Eve : A Famous Story","Description":"The story about Adam & Eve reviewed in 3D Animation movie !","Status":"wip","Thumbnail":"http:\/\/localhost\/assets\/images\/thumb\/projectAdamAndEve.png","Template":"Animation Movie","assetslist":"Apple, Adam, Eve, Garden of Eden"}]} We can see the GROUP_CONCAT is here (after "assetslist"), but the column stills empty. If asked, I can post the database SQL file. Thank you.
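    Two details stand out, offered as a hedged guess rather than a confirmed fix: the model issues two from() calls with nothing tying the asset rows to the project, so the aggregate runs over an unconstrained join, and the jTable field is named Asset while the JSON record key is assetslist, so jTable has no matching property to display. A sketch of the model query with explicit joins and per-project grouping (the condition b.project_id = p.project_id is an assumption about the schema):

        $this->db->select('p.project_id, p.Project, p.Description, p.Status, p.Thumbnail, t.Template');
        $this->db->select('GROUP_CONCAT(u.Asset SEPARATOR ", ") AS assetslist', FALSE);
        $this->db->from('assigned_projects_ppeople a');
        $this->db->join('projects p', 'p.project_id = a.project_id');
        $this->db->join('project_templates t', 't.template_id = p.template_id');
        $this->db->join('assigned_assets_pproject b', 'b.project_id = p.project_id', 'left');
        $this->db->join('assets u', 'u.asset_id = b.asset_id', 'left');
        $this->db->where('a.people_id', $login_id);
        $this->db->group_by('p.project_id');
        // ...then $this->db->get() and the jTable packaging as before.

    On the view side the field key would then be assetslist: { title: 'Assets' } in place of Asset:, so the column picks up the alias the query actually returns.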

    Read the article

  • How to convert a DataSet object into an ObjectContext (Entity Framework) object on the fly?

    - by Marcel
    Hi all, I have an existing SQL Server database, where I store data from large specific log files (often 100 MB and more), one per database. After some analysis, the database is deleted again. From the database, I have created both a Entity Framework Model and a DataSet Model via the Visual Studio designers. The DataSet is only for bulk importing data with SqlBulkCopy, after a quite complicated parsing process. All queries are then done using the Entity Framework Model, whose CreateQuery Method is exposed via an interface like this public IQueryable<TTarget> GetResults<TTarget>() where TTarget : EntityObject, new() { return this.Context.CreateQuery<TTarget>(typeof(TTarget).Name); } Now, sometimes my files are very small and in such a case I would like to omit the import into the database, but just have a an in-memory representation of the data, accessible as Entities. The idea is to create the DataSet, but instead of bulk importing, to directly transfer it into an ObjectContext which is accessible via the interface. Does this make sense? Now here's what I have done for this conversion so far: I traverse all tables in the DataSet, convert the single rows into entities of the corresponding type and add them to instantiated object of my typed Entity context class, like so MyEntities context = new MyEntities(); //create new in-memory context ///.... //get the item in the navigations table MyDataSet.NavigationResultRow dataRow = ds.NavigationResult.First(); //here, a foreach would be necessary in a true-world scenario NavigationResult entity = new NavigationResult { Direction = dataRow.Direction, ///... NavigationResultID = dataRow.NavigationResultID }; //convert to entities context.AddToNavigationResult(entity); //add to entities ///.... A very tedious work, as I would need to create a converter for each of my entity type and iterate over each table in the DataSet I have. Beware, if I ever change my database model.... Also, I have found out, that I can only instantiate MyEntities, if I provide a valid connection string to a SQL Server database. Since I do not want to actually write to my fully fledged database each time, this hinders my intentions. I intend to have only some in-memory proxy database. Can I do simpler? Is there some automated way of doing such a conversion, like generating an ObjectContext out of a DataSet object? P.S: I have seen a few questions about unit testing that seem somewhat related, but not quite exact.
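    On the "can I do simpler" point, one hedged sketch that avoids writing a converter per entity type: match DataSet column names to entity property names by reflection. It assumes the designer-generated columns and properties line up by name and type, which they usually do when both models came from the same database:

        using System;
        using System.Data;

        static class EntityConversionExtensions
        {
            // Hypothetical generic converter: copies matching, non-null columns
            // into a freshly constructed entity.
            public static T ToEntity<T>(this DataRow row) where T : new()
            {
                var entity = new T();
                foreach (DataColumn col in row.Table.Columns)
                {
                    var prop = typeof(T).GetProperty(col.ColumnName);
                    if (prop != null && prop.CanWrite && row[col] != DBNull.Value)
                        prop.SetValue(entity, row[col], null);
                }
                return entity;
            }
        }

    The per-table loop then shrinks to something like context.AddToNavigationResult(dataRow.ToEntity<NavigationResult>()), and a schema change shows up as a runtime mismatch in one place rather than in a dozen hand-written converters.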

    Read the article

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in Sql Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies perhaps it will help to know that I'm skilled in MSSQL Server and inexperienced in Oracle. I have multiple nested subreports and need to use summary data in outer reports and the same data but in detail in the inner reports. In order to spare the DB server from multiple executions, I thought to populate some temp tables at the beginning and then query just them the multiple times in the report and the subreports. In SSRS, Datasets are evidently executed in the order they appear in the RDL file. And you can have a dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made this the first Dataset in my report. This SP works when I run it from SQLDeveloper and I can query the data from the temp tables. However, this didn't appear to work out because SSRS was apparently not reusing the same session, so even though the global temporary tables were created with ON COMMIT PRESERVE ROWS my Datasets were empty. I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from Sql Developer works fine, example: DECLARE ActivityCode varchar2(15) := '1208-0916 '; ExecutionID varchar2(32) := SYS_GUID(); BEGIN CIPProjectBudget (ActivityCode, ExecutionID); END; Never mind that in this example I don't know the GUID, this simply proves it works because rows are inserted to my four tables. But in the SSRS report, I'm still getting no rows in my Datasets and SQL Developer confirms no rows are being inserted. So I'm thinking along the lines of: Oracle uses implicit transactions and my changes aren't getting committed? Even though I can prove that the non-rowset returning SP is executing (because if I leave out the parameter mapping it complains at report rendering time about not having enough parameters) perhaps it's not really executing. Somehow. Wrong execution order isn't the problem or rows would appear in the tables, and they aren't. I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working and I am stuck. If you want more details, in my SSRS report I have a List object (it's a container that repeats once for each row in a Dataset) that has some header values and then contains a subreport. Eventually, there will be four total reports: one main report, with three nested subreports. Each subreport will be in a List on the parent report.
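    Two transaction-related things may be worth ruling out (hedged, since the report definition isn't visible here): Oracle DML is invisible to other sessions until committed, and SSRS datasets are not guaranteed to share one connection, so an explicit COMMIT in the populating step is the cheap test. SSRS data sources also have a "Use single transaction when processing the queries" option that forces the datasets onto one connection, which is worth trying alongside it:

        DECLARE
          ActivityCode varchar2(15) := '1208-0916      ';
          ExecutionID  varchar2(32) := SYS_GUID();
        BEGIN
          CIPProjectBudget(ActivityCode, ExecutionID);
          COMMIT;  -- make the rows visible to the report's other connections
        END;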

    Read the article

  • Stored Procedure call with parameters in ASP.NET MVC

    - by cc0
    I have a working controller for another stored procedure in the database, but I am trying to test another. When I request the URL; http://host.com/Map?minLat=0&maxLat=50&minLng=0&maxLng=50 I get the following error message, which is understandable but I can't seem to find out why it occurs; Procedure or function 'esp_GetPlacesWithinGeoSpan' expects parameter '@MinLat', which was not supplied. This is the code I am using. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Mvc.Ajax; using System.Data; using System.Text; using System.Data.SqlClient; namespace prototype.Controllers { public class MapController : Controller { //Initial variable definitions //Array with chars to be used with the Trim() methods char[] lastComma = { ',' }; //Minimum and maximum lat/longs for queries float _minLat; float _maxLat; float _minLng; float _maxLng; //Creates stringbuilder object to store SQL results StringBuilder json = new StringBuilder(); //Defines which SQL-server to connect to, which database, and which user SqlConnection con = new SqlConnection(...connection string here...); // // HTTP-GET: /Map/ public string CallProcedure_getPlaces(float minLat, float maxLat, float minLng, float maxLng) { con.Open(); using (SqlCommand cmd = new SqlCommand("esp_GetPlacesWithinGeoSpan", con)) { cmd.CommandType = CommandType.Text; cmd.Parameters.AddWithValue("@MinLat", _minLat); cmd.Parameters.AddWithValue("@MaxLat", _maxLat); cmd.Parameters.AddWithValue("@MinLng", _minLng); cmd.Parameters.AddWithValue("@MaxLng", _maxLng); using (SqlDataReader reader = cmd.ExecuteReader()) { while (reader.Read()) { json.AppendFormat("\"{0}\":{{\"c\":{1},\"f\":{2}}},", reader["PlaceID"], reader["PlaceName"], reader["SquareID"]); } } con.Close(); } return "{" + json.ToString().TrimEnd(lastComma) + "}"; } //http://host.com/Map?minLat=0&maxLat=50&minLng=0&maxLng=50 public ActionResult Index(float minLat, float maxLat, float minLng, float maxLng) { _minLat = minLat; _maxLat = maxLat; _minLng = minLng; _maxLng = maxLng; return Content(CallProcedure_getPlaces(_minLat, _maxLat, _minLng, _maxLng)); } } } Any help on resolving this problem would be greatly appreciated.
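    The error message usually points at how the command is declared rather than at the values (a hedged but well-worn diagnosis): with CommandType.Text, the command string is executed as an ad-hoc batch, so the AddWithValue parameters are never bound to the procedure's arguments and SQL Server sees a bare call. Declaring the command as a stored procedure is the one-line change:

        using (SqlCommand cmd = new SqlCommand("esp_GetPlacesWithinGeoSpan", con))
        {
            cmd.CommandType = CommandType.StoredProcedure;  // was CommandType.Text
            cmd.Parameters.AddWithValue("@MinLat", _minLat);
            cmd.Parameters.AddWithValue("@MaxLat", _maxLat);
            cmd.Parameters.AddWithValue("@MinLng", _minLng);
            cmd.Parameters.AddWithValue("@MaxLng", _maxLng);
            // ...ExecuteReader and the JSON loop as before...
        }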

    Read the article

  • Run MySQL INSERT Query multiple times (insert values into multiple tables)

    - by Derek
    Hi, basically, I have 3 tables; users and projects (which is a many-to-many relationship), then I have 'usersprojects' to allow the one-to-many formation. When a user adds a project, I need the project information stored and then the 'userid' and 'projectid' stored in the usersprojects table. It sounds like its really straight forward but I'm having problems with the syntax I think!? As it stands, I have this as my INSERT queries (values going into 2 different tables): $project_id = $_POST['project_id']; $projectname = $_POST['projectname']; $projectdeadline = $_POST['projectdeadline']; $projectdetails = $_POST['projectdetails']; $user_id = $_POST['user_id']; $sql = "INSERT INTO projects (projectid, projectname, projectdeadline, projectdetails) VALUES ('{$projectid}','{$projectname}','{$projectdeadline}','{$projectdetails}')"; $sql = "INSERT INTO usersprojects (userid, projectid) VALUES ('{$userid}','{$projectid}')"; None of the information is being stored in the projects table, but the user ID is being stored in the usersprojects table (but not project ID!?)... I did have it working where the project information is stored correctly with a project ID, before I added this bit: $sql = "INSERT INTO usersprojects (userid, projectid) VALUES ('{$userid}','{$projectid}')"; But before the code above was put in, obviously no info is being stored in usersprojects table. The source code that links the script: <form id="addform" name="addform" method="POST" action="addproject-run.php"> <label>Project Name:</label> <input name="projectname" size="40" id="projectname" value="<?php if (isset($_POST['projectname'])); ?>"/><br /> <input name="user_id" input type="hidden" size="40" id="user_id" value="<?php echo $_SESSION['SESS_USERID']; ?>"/> <label>Project Deadline:</label> <input name="projectdeadline" size="40" id="projectdeadline" value="In the format of 'YYYY-MM-DD'<?php if (isset($_POST['projectdeadline'])); ?>"/><br /> <label>Project Details:</label> <textarea rows="5" cols="20" name="projectdetails" id="projectdetails"><?php if (isset($_POST['projectdetails'])); ?></textarea> <br /> <br /> <input value="Create Project" class="addbtn" type="submit" /> </form></div> So I think I'm right in saying I have the syntax for the SQL statement to be run an insert query of values into 2 tables? Any help is much appreciated! Thanks.
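    Two things jump out in the snippet, hedged as a reading of the posted code only: the second assignment overwrites $sql with no query executed in between, so the projects INSERT never runs; and the interpolated variables ($projectid, $userid) are not the ones populated from $_POST ($project_id, $user_id). Since the values come straight from the request, a prepared-statement version fixes both problems and the injection risk at once. A sketch assuming a PDO connection $pdo and an auto-increment projectid column:

        $stmt = $pdo->prepare(
            "INSERT INTO projects (projectname, projectdeadline, projectdetails)
             VALUES (?, ?, ?)");
        $stmt->execute(array($projectname, $projectdeadline, $projectdetails));

        $project_id = $pdo->lastInsertId();  // ID generated by the first INSERT

        $stmt = $pdo->prepare(
            "INSERT INTO usersprojects (userid, projectid) VALUES (?, ?)");
        $stmt->execute(array($user_id, $project_id));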

    Read the article

  • Entity Framework won't SaveChanges on new entity with two-level relationship

    - by Tim Rourke
    I'm building an ASP.NET MVC site using the ADO.NET Entity Framework. I have an entity model that includes these entities, associated by foreign keys: Report(ID, Date, Heading, Report_Type_ID, etc.) SubReport(ID, ReportText, etc.) - one-to-one relationship with Report. ReportSource(ID, Name, Description) - one-to-many relationship with Sub_Report. ReportSourceType(ID, Name, Description) - one-to-many relationship with ReportSource. Contact (ID, Name, Address, etc.) - one-to-one relationship with Report_Source. There is a Create.aspx page for each type of SubReport. The post event method returns a new Sub_Report entity. Before, in my post method, I followed this process: Set the properties for a new Report entity from the page's fields. Set the SubReport entity's specific properties from the page's fields. Set the SubReport entity's Report to the new Report entity created in 1. Given an ID provided by the page, look up the ReportSource and set the Sub_Report entity's ReportSource to the found entity. SaveChanges. This workflow succeeded just fine for a couple of weeks. Then last week something changed and it doesn't work any more. Now instead of the save operation, I get this Exception: UpdateException: "Entities in 'DIR2_5Entities.ReportSourceSet' participate in the 'FK_ReportSources_ReportSourceTypes' relationship. 0 related 'ReportSourceTypes' were found. 1 'Report_Source_Types' is expected." The debug visualizer shows the following: The SubReport's ReportSource is set and loaded, and all of its properties are correct. The Report_Source has a valid ReportSourceType entity attached. In SQL Profiler the prepared SQL statement looks OK. Can anybody point me to what obvious thing I'm missing? TIA Notes: The Report and SubReport are always new entities in this case. The Report entity contains properties common to many types of reports and is used for generic queries. SubReports are specific reports with extra parameters varying by type. There is actually a different entity set for each type of SubReport, but this question applies to all of them, so I use SubReport as a simplified example.

    Read the article

  • Core Data NSPredicate for relationships.

    - by Mugunth Kumar
    My object graph is simple. I've a feedentry object that stores info about RSS feeds and a relationship called Tag that links to "TagValues" object. Both the relation (to and inverse) are to-many. i.e, a feed can have multiple tags and a tag can be associated to multiple feeds. I referred to http://stackoverflow.com/questions/844162/how-to-do-core-data-queries-through-a-relationship and created a NSFetchRequest. But when fetch data, I get an exception stating, NSInvalidArgumentException unimplemented SQL generation for predicate What should I do? I'm a newbie to core data :( I know I've done something terribly wrong... Please help... Thanks -- NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; // Edit the entity name as appropriate. NSEntityDescription *entity = [NSEntityDescription entityForName:@"FeedEntry" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // Edit the sort key as appropriate. NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"authorname" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; NSEntityDescription *tagEntity = [NSEntityDescription entityForName:@"TagValues" inManagedObjectContext:self.managedObjectContext]; NSPredicate *tagPredicate = [NSPredicate predicateWithFormat:@"tagName LIKE[c] 'nyt'"]; NSFetchRequest *tagRequest = [[NSFetchRequest alloc] init]; [tagRequest setEntity:tagEntity]; [tagRequest setPredicate:tagPredicate]; NSError *error = nil; NSArray* predicates = [self.managedObjectContext executeFetchRequest:tagRequest error:&error]; TagValues *tv = (TagValues*) [predicates objectAtIndex:0]; NSLog(tv.tagName); // it is nyt here... NSPredicate *predicate = [NSPredicate predicateWithFormat:@"tag IN %@", predicates]; [fetchRequest setPredicate:predicate]; // Edit the section name key path and cache name if appropriate. // nil for section name key path means "no sections". NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; --
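    The "unimplemented SQL generation" error generally means the predicate cannot be translated for a SQLite store. With a to-many relationship, the translatable form is an aggregate (ANY/ALL) predicate, which also collapses the two fetches into one (a hedged sketch, using a substitution token instead of embedding the literal):

        NSPredicate *predicate =
            [NSPredicate predicateWithFormat:@"ANY tag.tagName LIKE[c] %@", @"nyt"];
        [fetchRequest setPredicate:predicate];

    The separate TagValues fetch and the IN clause then become unnecessary; Core Data walks the relationship itself.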

    Read the article

  • Linq to List and IEnumerable issues

    - by Otaku
    I am querying an HTML file with Linq. It looks something like this: <html> <body> <div class="Players"> <div class="role">Goalies</div> <div class="name">John Smith</div> <div class="name">Shawn Xie</div> <div class="role">Right Wings</div> <div class="name">Jack Davis</div> <div class="name">Carl Yuns</div> <div class="name">Wayne Gortonia</div> <div class="role">Centers</div> <div class="name">Lutz Gaspy</div> <div class="name">John Jacobs</div> </div </html> </body> What I'm trying to do is create a list of these folks like in a list of a structure called Players: Structure Players Public Name As String Public Position As String End Structure But I've quickly found out I don't really know what I'm doing when it comes to Linq. I've got this far my my queries: Dim goalieList = From d In player.Elements _ Where d.Value = "Goalies" _ Select From g In d.ElementsAfterSelf _ Take While (g.@class <> "role") _ Select New Players With {.Position = "Goalie", _ .Name = g.Value} Dim centersList = From d In player.Elements _ Where d.Value = "Centers" _ Select From g In d.ElementsAfterSelf _ Take While (g.@class <> "role") _ Select New Players With {.Position = "Centers", _ .Name = g.Value} Which gets me down to the the players by position, but then I can't do much with this afterwards the result type is System.Collections.Generic.IEnumerable(Of System.Collections.Generic.IEnumerable(Of Player)) What I want to do is add these two results to a new list, like: Dim playersList As List(Of Players) = Nothing playersList.AddRange(centersList) playersList.AddRange(goalieList) So that I can then query the list and use it. But it kicks the error: Unable to cast object of type 'WhereSelectEnumerableIterator2[System.Xml.Linq.XElement,System.Collections.Generic.IEnumerable1[Players]]' to type 'System.Collections.Generic.IEnumerable`1[Players]' As you can see, I may really have no idea how to work with all these objects/classes. Does anyone have any insight on what I may be doing wrong and how I can resolve it? RESOLVED: The Linq query needs to return a single iEnumerable, like this: Dim goalieList = From l In _ (From d In players.Elements _ Where d.Value = "Goalies" _ Select d.ElementsAfterSelf.TakeWhile(Function(f) f.@class <> "role")) _ Select New Players With {.Position = "Goalie", .Name = l.Value} and then use goalieList.ToList

    Read the article

  • Mongodb performance on Windows

    - by Chris
    I've been researching nosql options available for .NET lately and MongoDB is emerging as a clear winner in terms of availability and support, so tonight I decided to give it a go. I downloaded version 1.2.4 (Windows x64 binary) from the mongodb site and ran it with the following options: C:\mongodb\bin>mkdir data C:\mongodb\bin>mongod -dbpath ./data --cpu --quiet I then loaded up the latest mongodb-csharp driver from http://github.com/samus/mongodb-csharp and immediately ran the benchmark program. Having heard about how "amazingly fast" MongoDB is, I was rather shocked at the poor benchmark performance. Starting Tests encode (small).........................................320000 00:00:00.0156250 encode (medium)........................................80000 00:00:00.0625000 encode (large).........................................1818 00:00:02.7500000 decode (small).........................................320000 00:00:00.0156250 decode (medium)........................................160000 00:00:00.0312500 decode (large).........................................2370 00:00:02.1093750 insert (small, no index)...............................2176 00:00:02.2968750 insert (medium, no index)..............................2269 00:00:02.2031250 insert (large, no index)...............................778 00:00:06.4218750 insert (small, indexed)................................2051 00:00:02.4375000 insert (medium, indexed)...............................2133 00:00:02.3437500 insert (large, indexed)................................835 00:00:05.9843750 batch insert (small, no index).........................53333 00:00:00.0937500 batch insert (medium, no index)........................26666 00:00:00.1875000 batch insert (large, no index).........................1114 00:00:04.4843750 find_one (small, no index).............................350 00:00:14.2812500 find_one (medium, no index)............................204 00:00:24.4687500 find_one (large, no index).............................135 00:00:37.0156250 find_one (small, indexed)..............................352 00:00:14.1718750 find_one (medium, indexed).............................184 00:00:27.0937500 find_one (large, indexed)..............................128 00:00:38.9062500 find (small, no index).................................516 00:00:09.6718750 find (medium, no index)................................316 00:00:15.7812500 find (large, no index).................................216 00:00:23.0468750 find (small, indexed)..................................532 00:00:09.3906250 find (medium, indexed).................................346 00:00:14.4375000 find (large, indexed)..................................212 00:00:23.5468750 find range (small, indexed)............................440 00:00:11.3593750 find range (medium, indexed)...........................294 00:00:16.9531250 find range (large, indexed)............................199 00:00:25.0625000 Press any key to continue... For starters, I can get better non-batch insert performance from SQL Server Express. What really struck me, however, was the slow performance of the find_nnnn queries. Why is retrieving data from MongoDB so slow? What am I missing? Edit: This was all on the local machine, no network latency or anything. MongoDB's CPU usage ran at about 75% the entire time the test was running. Edit 2: Also, I ran a trace on the benchmark program and confirmed that 50% of the CPU time spent was waiting for MongoDB to return data, so it's not a performance issue with the C# driver.

    Read the article

  • mysql never releases memory

    - by Ishu
    I have a production server clocking about 4 million page views per month. The server has got 8GB of RAM and mysql acts as a database. I am facing problems in handling mysql to take this load. I need to restart mysql twice a day to handle this thing. The problem with mysql is that it starts with some particular occupation, the memory consumed by mysql keeps on increasing untill it reaches the maximum it can consume and then mysql stops responding slowly or does not respond at all, which freezes the server. All my tables are indexed properly and there are no long queries. I need some one to help on how to go about debugging what to do here. All my tables are myisam. I have tried configuring the parameters key_buffer etc but to no rescue. Any sort of help is greatly appreciated. Here are some parameters which may help. mysql --version mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1 mysql> show variables; +---------------------------------+------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/ | | bdb_cache_size | 8384512 | | bdb_home | /var/lib/mysql/ | | bdb_log_buffer_size | 262144 | | bdb_logdir | | | bdb_max_lock | 10000 | | bdb_shared_data | OFF | | bdb_tmpdir | /tmp/ | | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_filesystem | binary | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 10 | | datadir | /var/lib/mysql/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | keep_files_on_create | OFF | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | ft_boolean_syntax | + -><()~*:""&| | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | NO | | have_bdb | YES | | have_blackhole_engine | NO | | have_compress | YES | | have_crypt | YES | | have_csv | NO | | have_dynamic_loading | YES | | have_example_engine | NO | | have_federated_engine | NO | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_merge_engine | YES | | have_ndbcluster | NO | | have_openssl | DISABLED | | have_ssl | DISABLED | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | | init_connect | | | init_file | | | init_slave | | | interactive_timeout | 28800 | | join_buffer_size | 131072 | | key_buffer_size | 2621440000 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | lc_time_names | en_US | | license | GPL | | 
local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | ON | | log_bin_trust_function_creators | OFF | | log_error | | | log_queries_not_using_indexes | OFF | | log_slave_updates | OFF | | log_slow_queries | ON | | log_warnings | 1 | | long_query_time | 8 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 0 | | max_allowed_packet | 8388608 | | max_binlog_cache_size | 4294963200 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 400 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 16777216 | | max_insert_delayed_threads | 20 | | max_join_size | 4294967295 | | max_length_for_sort_data | 1024 | | max_prepared_stmt_count | 16382 | | max_relay_log_size | 0 | | max_seeks_for_key | 4294967295 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 4294967295 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 2146435072 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 16777216 | | myisam_stats_method | nulls_unequal | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 2000 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /var/run/mysqld/mysqld.pid | | plugin_dir | | | port | 3306 | | preload_buffer_size | 32768 | | profiling | OFF | | profiling_history_size | 15 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | query_cache_min_res_unit | 4096 | | query_cache_size | 134217728 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 4096 | | read_buffer_size | 2097152 | | read_only | OFF | | read_rnd_buffer_size | 8388608 | | relay_log | | | relay_log_index | | | relay_log_info_file | relay-log.info | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | secure_file_priv | | | server_id | 1 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /var/lib/mysql/mysql.sock | | sort_buffer_size | 2097152 | | sql_big_selects | ON | | sql_mode | | | sql_notes | ON | | sql_warnings | OFF | | ssl_ca | | | ssl_capath | | | ssl_cert | | | ssl_cipher | | | ssl_key | | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | system_time_zone | CST | | table_cache | 256 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 8 | | thread_stack | 196608 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | /tmp/ | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.77-log | | version_bdb | Sleepycat Software: Berkeley DB 4.1.24: (January 29, 2009) | | version_comment | Source distribution | | version_compile_machine | i686 | | version_compile_os | redhat-linux-gnu | | wait_timeout | 28800 | +---------------------------------+------------------------------------------------------------+
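    For what it's worth, a back-of-the-envelope total from the variables above suggests the configuration alone can outgrow the box (rough estimates, not measurements):

        Per-connection buffers (allocated on demand, worst case per thread):
          sort_buffer_size        2 MB
          read_buffer_size        2 MB
          read_rnd_buffer_size    8 MB
          join_buffer_size        0.125 MB
          -> ~12 MB x max_connections (400) = ~4.8 GB

        Global buffers:
          key_buffer_size        ~2.5 GB
          query_cache_size        128 MB

        ~4.8 GB + ~2.6 GB leaves very little of the 8 GB for the OS,
        the page cache, and per-thread stacks.

    Lowering max_connections or the per-connection read/sort buffers is the usual first adjustment for this symptom; memory growth that tracks the connection count would also explain why it is "never released" until a restart.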

    Read the article

  • T-SQL selecting values that match ISNUMERIC and are within a specified range (plus Linq-to-sql)

    - by Toby
    I am trying to select rows from a table where one of the (NVARCHAR) columns is within a numeric range. SELECT ID, Value FROM Data WHERE ISNUMERIC(Value) = 1 AND CONVERT(FLOAT, Value) < 66.6 Unfortunately as part of the SQL spec the AND clauses don't have to short circuit (and don't on MSSQL Server EE 2008). More info: http://stackoverflow.com/questions/789231/is-the-sql-where-clause-short-circuit-evaluated My next attempt was to try this to see if I could achieve delayed evaluation of the CONVERT SELECT ID, Value FROM Data WHERE (CASE WHEN ISNUMERIC(Value) = 1 THEN CONVERT(FLOAT, Value) < 66.6 ELSE 0 END) but I cannot seem to use a < (or any comparison) with the result of a CONVERT. It fails with the error Incorrect syntax near '<'. I can get away with SELECT ID, CONVERT(FLOAT, Value) AS Value FROM Data WHERE ISNUMERIC(Value) = 1 So the obvious solution is to wrap the whole select statement in another SELECT and WHERE and return the converted values from the inner select and filter in there where of the outer select. Unfortunately this is where my Linq-to-sql problem comes in. I am filtering not only by one range but potentialy by many, or just by the existance of the record (there are some date range selects and comparisons I've left out.) Essentially I would like to be able to generate something like this: SELECT ID, TypeID, Value FROM Data WHERE (TypeID = 4 AND ISNUMERIC(Value) AND CONVERT(Float, Value) < 66.6) OR (TypeID = 8 AND ISNUMERIC(Value) AND CONVERT(Float, Value) > 99) OR (TypeID = 9) (With some other clauses in each of those where options.) This clearly doesn't work if I filter out the non-ISNUMERIC values in an inner select. As I mentioned I am using Linq-to-sql (and PredicateBulider) to build up these queries but unfortunately Datas.Where(x => ISNUMERIC(x.Value) ? Convert.ToDouble(x.Value) < 66.6 : false) Gets converted to this which fails the initial problem. WHERE (ISNUMERIC([t0].[Value]) = 1) AND ((CONVERT(Float,[t0].[Value])) < @p0) My last resort will have to be to outer join against a double select on the same table for each of the comparisons but this isn't really an idea solution. I was wondering if anyone has run into similar issues before?
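    For reference, the CASE attempt fails only because the comparison sits inside the CASE: CASE is an expression, so something outside it has to consume its value. The usual workaround lets non-numeric rows produce NULL, and NULL < 66.6 filters them out:

        SELECT ID, Value
        FROM Data
        WHERE (CASE WHEN ISNUMERIC(Value) = 1
                    THEN CONVERT(FLOAT, Value)
               END) < 66.6

    One caveat: ISNUMERIC also accepts strings such as '$', which CONVERT(FLOAT, ...) still rejects, so exotic input can break even this form.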

    Read the article

  • Linq To Sql Concat() dropping fields in created TSQL

    - by user191468
    This is strange. I am moving a stored proc to a service. The TSQL unions multiple selects. To replicate this I created multiple queries resulting in a common new concrete type. Then I issue a return result.ToString(); and the resulting SQL selects have varying numbers of columns specified thus causing an MSSQL Msg 205... using (var db = GetDb()) { var fundInv = from f in db.funds select new Investments { Company = f.company, FullName = f.fullname, Admin = f.admin, Fund = f.fund1, FundCode = f.fundcode, Source = STR_FUNDS, IsPortfolio = false, IsActive = f.active, Strategy = f.strategy, SubStrategy = f.substrategy, AltStrategy = f.altstrategy, AltSubStrategy = f.altsubstrategy, Region = f.region, AltRegion = f.altregion, UseAlternate = f.usealt, ClassesAllowed = f.classallowed }; var stocksInv = from s in db.stocks where !fundInv.Select(f => f.Company).Contains(s.vehcode) select new Investments { Company = s.company, FullName = s.issuer, Admin = STR_PRS, Fund = s.shortname, FundCode = s.vehcode, Source = STR_STOCK, IsPortfolio = false, IsActive = (s.inactive == null), Strategy = s.style, SubStrategy = s.substyle, AltStrategy = s.altstyle, AltSubStrategy = s.altsubsty, Region = s.geography, AltRegion = s.altgeo, UseAlternate = s.usealt, ClassesAllowed = STR_GENERIC }; var bondsInv = from oi in db.bonds where !fundInv.Select(f => f.Company).Contains(oi.vehcode) select new Investments { Company = string.Empty, FullName = oi.issue, Admin = STR_PRS1, Fund = oi.issue, FundCode = oi.vehcode, Source = STR_BONDS, IsPortfolio = false, IsActive = oi.closed, Strategy = STR_OTH, SubStrategy = STR_OTH, AltStrategy = STR_OTH, AltSubStrategy = STR_OTH, Region = STR_OTH, AltRegion = STR_OTH, UseAlternate = false, ClassesAllowed = STR_GENERIC }; return (fundInv.Concat(stocksInv).Concat(bondsInv)).ToList(); } The code above results in a complex select statement where each "table" above has different column count. (see SQL below) I've been trying a few things but no change yet. Ideas are welcome. 
SELECT [t6].[company] AS [Company], [t6].[fullname] AS [FullName], [t6].[admin] AS [Admin], [t6].[fund] AS [Fund], [t6].[fundcode] AS [FundCode], [t6].[value] AS [Source], [t6].[value2] AS [IsPortfolio], [t6].[active] AS [IsActive], [t6].[strategy] AS [Strategy], [t6].[substrategy] AS [SubStrategy], [t6].[altstrategy] AS [AltStrategy], [t6].[altsubstrategy] AS [AltSubStrategy], [t6].[region] AS [Region], [t6].[altregion] AS [AltRegion], [t6].[usealt] AS [UseAlternate], [t6].[classallowed] AS [ClassesAllowed] FROM ( SELECT [t3].[company], [t3].[fullname], [t3].[admin], [t3].[fund], [t3].[fundcode], [t3].[value], [t3].[value2], [t3].[active], [t3].[strategy], [t3].[substrategy], [t3].[altstrategy], [t3].[altsubstrategy], [t3].[region], [t3].[altregion], [t3].[usealt], [t3].[classallowed] FROM ( SELECT [t0].[company], [t0].[fullname], [t0].[admin], [t0].[fund], [t0].[fundcode], @p0 AS [value], [t0].[active], [t0].[strategy], [t0].[substrategy], [t0].[altstrategy], [t0].[altsubstrategy], [t0].[region], [t0].[altregion], [t0].[usealt], [t0].[classallowed] FROM [zInvest].[funds] AS [t0] UNION ALL SELECT [t1].[company], [t1].[issuer], @p6 AS [value], [t1].[shortname], [t1].[vehcode], @p7 AS [value2], @p8 AS [value3], (CASE WHEN [t1].[inactive] IS NULL THEN 1 ELSE 0 END) AS [value5], [t1].[style], [t1].[substyle], [t1].[altstyle], [t1].[altsubsty], [t1].[geography], [t1].[altgeo], [t1].[usealt], @p10 AS [value6] FROM [zBank].[stocks] AS [t1] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [zInvest].[funds] AS [t2] WHERE [t2].[company] = [t1].[vehcode] ))) AND ([t1].[vehcode] <> @p2) AND (SUBSTRING([t1].[vehcode], @p3 + 1, @p4) <> @p5) ) AS [t3] UNION ALL SELECT @p11 AS [value], [t4].[issue], @p12 AS [value2], [t4].[vehcode], @p13 AS [value3], @p14 AS [value4], [t4].[closed], @p16 AS [value6], @p17 AS [value7] FROM [zMut].[bonds] AS [t4] WHERE NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [zInvest].[funds] AS [t5] WHERE [t5].[company] = [t4].[vehcode] )) ) AS [t6]
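    One hedged workaround, at the cost of three round trips instead of one UNION ALL: materialize each projection with ToList() so the concatenation happens in memory, where the column counts can no longer disagree:

        return fundInv.ToList()
            .Concat(stocksInv.ToList())
            .Concat(bondsInv.ToList())
            .ToList();

    The Contains() anti-join subqueries inside stocksInv and bondsInv still translate to SQL, since fundInv remains an IQueryable inside those query definitions.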

    Read the article

  • Dynamically select field names in a query with Spring JDBCTemplate

    - by Francesco
    Hi, I have a problem with parameters replacing by Spring JdbcTemplate. I have this query : <bean id="fixQuery" class="java.lang.String"> <constructor-arg type="java.lang.String" value="select fa.id, fi.? from fix_ambulation fa left join fix_i18n fi on fa.translation_id = fi.id order by name" /> And this method : public List<FixAmbulation> readFixAmbulation(String locale) throws Exception { List<FixAmbulation> ambulations = this.getJdbcTemplate().query( fixQuery, new Object[] {locale.toLowerCase()}, ParameterizedBeanPropertyRowMapper .newInstance(FixAmbulation.class)); return ambulations; } And I'd like to have the ? filled with the string representing the locale the user is using. So if the user is brasilian I'd send him the column pt_br from the table fix_i18n, otherwise if he's american I'd send him the column en_us. What I get from this method is a PostgreSQL exception org.postgresql.util.PSQLException: ERROR: syntax error at or near "$1" If I replace fi.? with just ? (the column name of the locale is unique, so if I run this query in the database it works just fine) what I get is that every object returned from method has the string locale into the field name. I.e. in name field I have "en_us". The only way to have it working I found was to change the method into : public List<FixAmbulation> readFixAmbulation(String locale) throws Exception { String query = "select fa.id, fi." + locale.toLowerCase() + " as name " + fixQuery; this.log.info("QUERY : " + query); List<FixAmbulation> ambulations = this.getJdbcTemplate().query( query, ParameterizedBeanPropertyRowMapper .newInstance(FixAmbulation.class)); return ambulations; } and setting fixQuery to : <bean id="fixQuery" class="java.lang.String"> <constructor-arg type="java.lang.String" value=" from telemedicina.fix_ambulation fa left join telemedicina.fix_i18n fi on fa.translation_id = fi.id order by name" /> </bean> My DAO extends Spring JdbcDaoSupport and works just fine for all other queries. What am I doing wrong?
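    The underlying rule explains both symptoms: JDBC ? placeholders can only bind values, never identifiers, so fi.? is a syntax error to PostgreSQL and a bare ? simply selects the bound string as a constant. String concatenation is therefore the right shape; a hedged sketch just adds a whitelist so the locale cannot inject arbitrary SQL (the column names are an assumption about the fix_i18n schema):

        // requires java.util.Arrays and java.util.List imports
        private static final List<String> ALLOWED_LOCALES =
                Arrays.asList("en_us", "pt_br");

        public List<FixAmbulation> readFixAmbulation(String locale) {
            String column = locale.toLowerCase();
            if (!ALLOWED_LOCALES.contains(column)) {
                throw new IllegalArgumentException("Unsupported locale: " + locale);
            }
            String sql = "select fa.id, fi." + column + " as name"
                       + " from fix_ambulation fa"
                       + " left join fix_i18n fi on fa.translation_id = fi.id"
                       + " order by name";
            return getJdbcTemplate().query(sql,
                    ParameterizedBeanPropertyRowMapper.newInstance(FixAmbulation.class));
        }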

    Read the article

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attain quite a lot of users, if it works out right. However while I'm fully capable of developing the program I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here's a few questions on it (feel free to turn this question into a resource thread as well). Databases At the moment I plan to use the MySQLi features in PHP5. However how should I setup the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database - although I've been considering spreading user data to one, actual content to another and finally core site content (template masters etc.) to another. My reasoning behind this is that sending queries to different databases will ease up the load on them as one database = 3 load sources. Also would this still be effective if they were all on the same server? Caching I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database and each time a template is called it's cached copy (a html document) is called. At the moment I have two types of variable in these templates - a static var and a dynamic var. Static vars are usually things like page names, the name of the site - things that don't change often; dynamic vars are things that change on each page load. My question on this: Say I have comments on different articles. Which is a better solution: store the simple comment template and render comments (from a DB call) each time the page is loaded or store a cached copy of the comments page as a html page - each time a comment is added/edited/deleted the page is recached. Finally Does anyone have any tips/pointers for running a high load site on PHP. I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for? Thanks, Ross
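    On the comments question, the two options are not mutually exclusive: a common middle ground caches the rendered fragment and invalidates it on write, so reads stay cheap and edits appear immediately. A hedged PHP sketch with the memcache extension, where $articleId and renderCommentsFromDb() are placeholders:

        $memcache = new Memcache();
        $memcache->connect('127.0.0.1', 11211);

        $key  = 'comments_html_' . $articleId;
        $html = $memcache->get($key);
        if ($html === false) {
            $html = renderCommentsFromDb($articleId);  // hypothetical render helper
            $memcache->set($key, $html, 0, 300);       // 5-minute expiry as a safety net
        }
        echo $html;

        // On comment add/edit/delete:
        // $memcache->delete($key);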

    Read the article

  • How does mysql define DISTINCT() in reference documentation

    - by goran
    EDIT: This question is about finding definitive reference to MySQL syntax on SELECT modifying keywords and functions. /EDIT AFAIK SQL defines two uses of DISTINCT keywords - SELECT DISTINCT field... and SELECT COUNT(DISTINCT field) ... However in one of web applications that I administer I've noticed performance issues on queries like SELECT DISTINCT(field1), field2, field3 ... DISTINCT() on a single column makes no sense and I am almost sure it is interpreted as SELECT DISTINCT field1, field2, field3 ... but how can I prove this? I've searched mysql site for a reference on this particular syntax, but could not find any. Does anyone have a link to definition of DISTINCT() in mysql or knows about other authoritative source on this? Best EDIT After asking the same question on mysql forums I learned that while parsing the SQL mysql does not care about whitespace between functions and column names (but I am still missing a reference). As it seems you can have whitespace between functions and the parenthesis SELECT LEFT (field1,1), field2... and get mysql to understand it as SELECT LEFT(field,1) Similarly SELECT DISTINCT(field1), field2... seems to get decomposed to SELECT DISTINCT (field1), field2... and then DISTINCT is taken not as some undefined (or undocumented) function, but as SELECT modifying keyword and the parenthesis around field1 are evaluated as if they were part of field expression. It would be great if someone would have a pointer to documentation where it is stated that the whitespace between functions and parenthesis is not significant or to provide links to apropriate MySQL forums, mailing lists where I could raise a question to put this into reference. EDIT I have found a reference to server option IGNORE SPACE. It states that "The IGNORE SPACE SQL mode can be used to modify how the parser treats function names that are whitespace-sensitive", later on it states that recent versions of mysql have reduced this number from 200 to 30. One of the remaining 30 is COUNT for example. With IGNORE SPACE enabled both SELECT COUNT(*) FROM mytable; SELECT COUNT (*) FROM mytable; are legal. So if this is an exception, I am left to conclude that normally functions ignore space by default. If functions ignore space by default then if the context is ambiguous, such as for the first function on a first item of the select expression, then they are not distinguishable from keywords and the error can not be thrown and MySQL must accept them as keywords. Still, my conclusions feel like they have lot of assumptions, I would still be grateful and accept any pointers to see where to follow up on this.
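    A short illustration of the conclusion: once whitespace between the keyword and the parenthesis is irrelevant, the parentheses belong to the column expression rather than to a function call, so all of these parse to the same statement:

        SELECT DISTINCT(field1), field2, field3 FROM t;
        SELECT DISTINCT (field1), field2, field3 FROM t;
        SELECT DISTINCT field1, field2, field3 FROM t;

    In other words, DISTINCT here is always the SELECT modifier applied to the whole select list, never a function of one column.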

    Read the article

  • While loops within while loops and output php?

    - by NovacTownCode
    I have a while loop to show the replies to a post on my website. $post is an array of details for the post being viewed, and $post['postID'] is used as the parentID value in the query. As seen below, it outputs each subject as a link to view the full post:

    $q = $dbc->prepare("SELECT * FROM boardposts WHERE parentID = ?");
    $q->execute(array($post['postID']));
    while ($postReply = $q->fetch(PDO::FETCH_ASSOC)) {
        echo '<p><a href="http://www.example.com/boards?topic=' . $_GET['topic'] . '&amp;view=' . $postReply['postID'] . '">' . $postReply['subject'] . '</a>';
    }

    This currently outputs something along the lines of:

    Replies To This Message:
    subject 1
    subject 2
    subject 3
    subject 4

    Is there a way I can also include replies to the replies in the list, something along the lines of:

    Replies To This Message:
    subject 1
        subject 1 reply
        subject 1 reply
            subject 1 reply reply
    subject 2
    subject 3
        subject 3 reply
        subject 3 reply
            subject 3 reply reply
    subject 4
        subject 4 reply
    subject 5
    subject 6
        subject 6 reply
            subject 6 reply reply

    I understand all the indenting can be done with CSS, but I'm stuck on how to pull the data from the MySQL database in the correct order. I tried while loops within while loops, but that meant running queries inside while loops, which is bad! Thanks for your input!
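    The usual fix is language-agnostic: load every post in the thread with one query, group the rows by parentID in memory, then recurse over the groups. A minimal sketch of the idea in C# (PostRow, LoadAllPostsForTopic and the sample data are illustrative assumptions, not part of the original code):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // hypothetical flat row, one per board post
    record PostRow(int PostId, int? ParentId, string Subject);

    static class ReplyTree
    {
        // prints each post, indenting replies one level deeper than their parent
        static void PrintReplies(ILookup<int?, PostRow> byParent, int? parentId, int depth)
        {
            foreach (var post in byParent[parentId])
            {
                Console.WriteLine(new string(' ', depth * 4) + post.Subject);
                PrintReplies(byParent, post.PostId, depth + 1); // recurse into this post's replies
            }
        }

        static void Main()
        {
            // one query loads the whole thread, e.g. SELECT postID, parentID, subject
            // FROM boardposts WHERE topicID = ? -- LoadAllPostsForTopic is a stand-in for that
            List<PostRow> rows = LoadAllPostsForTopic(42);
            var byParent = rows.ToLookup(r => r.ParentId);
            PrintReplies(byParent, parentId: 1, depth: 0); // 1 = postID of the page being viewed
        }

        // stand-in for the single database round trip
        static List<PostRow> LoadAllPostsForTopic(int topicId) =>
            new() { new(2, 1, "subject 1"), new(5, 2, "subject 1 reply"), new(3, 1, "subject 2") };
    }

    The same shape works in PHP with one fetchAll() grouped into an array keyed by parentID, followed by a recursive render function.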

    Read the article

  • How to use PredicateBuilder with nested OR conditionals in Linq

    - by tblank
    I've been very happily using PredicateBuilder, but until now have only used it for queries that chain together either AND statements or OR statements alone. Now, for the first time, I need a pair of OR conditions nested along with some AND conditions, like this:

    select x from Table1 where a = 1 AND b = 2 AND (z = 1 OR y = 2)

    Using the documentation from Albahari, I've constructed my expression like this:

    Expression<Func<TdIncSearchVw, bool>> predicate = PredicateBuilder.True<TdIncSearchVw>(); // for AND
    Expression<Func<TdIncSearchVw, bool>> innerOrPredicate = PredicateBuilder.False<TdIncSearchVw>(); // for OR
    innerOrPredicate = innerOrPredicate.Or(i => i.IncStatusInd.Equals(incStatus));
    innerOrPredicate = innerOrPredicate.Or(i => i.RqmtStatusInd.Equals(incStatus));
    predicate = predicate.And(i => i.TmTec.Equals(tecTm));
    predicate = predicate.And(i => i.TmsTec.Equals(series));
    predicate = predicate.And(i => i.HistoryInd.Equals(historyInd));
    predicate.And(innerOrPredicate);
    var query = repo.GetEnumerable(predicate);

    This results in SQL that completely ignores the two OR phrases:

    select x from TdIncSearchVw where ((this_."TM_TEC" = :p0 and this_."TMS_TEC" = :p1) and this_."HISTORY_IND" = :p2)

    If I try using just the OR phrases, like:

    Expression<Func<TdIncSearchVw, bool>> innerOrPredicate = PredicateBuilder.False<TdIncSearchVw>(); // for OR
    innerOrPredicate = innerOrPredicate.Or(i => i.IncStatusInd.Equals(incStatus));
    innerOrPredicate = innerOrPredicate.Or(i => i.RqmtStatusInd.Equals(incStatus));
    var query = repo.GetEnumerable(innerOrPredicate);

    I get SQL as expected, like:

    select X from TdIncSearchVw where (IncStatusInd = incStatus OR RqmtStatusInd = incStatus)

    If I try using just the AND phrases, like:

    predicate = predicate.And(i => i.TmTec.Equals(tecTm));
    predicate = predicate.And(i => i.TmsTec.Equals(series));
    predicate = predicate.And(i => i.HistoryInd.Equals(historyInd));
    var query = repo.GetEnumerable(predicate);

    I get SQL like:

    select x from TdIncSearchVw where ((this_."TM_TEC" = :p0 and this_."TMS_TEC" = :p1) and this_."HISTORY_IND" = :p2)

    which is exactly the same as the first query. It seems like I'm so close it must be something simple that I'm missing. Can anyone see what I'm doing wrong here? Thanks, Terry
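    A likely culprit, judging from the code as posted (an observation, not a confirmed answer): expression trees are immutable, so PredicateBuilder's And returns a new expression rather than mutating the receiver. The line predicate.And(innerOrPredicate); discards its result, which is consistent with the OR group vanishing from the generated SQL. A minimal sketch of the fix:

    // And() returns a new expression tree; capture the result,
    // exactly as the earlier predicate = predicate.And(...) lines already do
    predicate = predicate.And(innerOrPredicate);
    var query = repo.GetEnumerable(predicate);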

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.)

    I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view, and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of hundreds of thousands of words.

    In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of:

    SELECT id, * FROM textTable
    JOIN (SELECT DISTINCT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz') ON id = textTableId
    LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.:

    SELECT * FROM textTable
    WHERE id IN (SELECT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz')
    LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy.

    I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control over the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of word BEGINSWITH 'foo'? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes.

    I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for table view queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited-resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?

    Read the article

  • Linq-to-sql Compiled Query returning object NOT belonging to submitted DataContext

    - by Vladimir Kojic
    Compiled query:

    public static class Machines
    {
        public static readonly Func<OperationalDataContext, short, Machine> QueryMachineById =
            CompiledQuery.Compile((OperationalDataContext db, short machineID) =>
                db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault());

        public static Machine GetMachineById(IUnitOfWork unitOfWork, short id)
        {
            Machine machine;
            // Old code (working)
            //var machineRepository = unitOfWork.GetRepository<Machine>();
            //machine = machineRepository.Find(m => m.MachineID == id).SingleOrDefault();

            // New code (making problems)
            machine = QueryMachineById(unitOfWork.DataContext, id);
            return machine;
        }
    }

    It looks like the compiled query is caching the Machine object and returning the same object even when the query is called from a new DataContext (I'm disposing the DataContext in the service, but I'm getting a Machine from the previous DataContext). I use POCOs and XML mapping.

    Revised: It looks like the compiled query is returning a result that does not belong to the DataContext I passed into it. Therefore I cannot reuse the returned object and link it to another object obtained from that context through non-compiled queries.

    [TestMethod]
    public void GetMachinesTest()
    {
        // Test preparation (not important)
        using (var unitOfWork = IoC.Get<IUnitOfWork>())
        {
            var machineRepository = unitOfWork.GetRepository<Machine>();
            // GET ALL
            List<Machine> list = machineRepository.FindAll().ToList<Machine>();
            VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET", "MachineIconC.xaml",
                false, true, LicenseType.Licensed, "10.0.97.3", "10.0.97.3", 0);
            var machine = Machines.GetMachineById(unitOfWork, 3);
            Assert.AreSame(list[2], machine); // PASS !!!!
        }
        using (var unitOfWork = IoC.Get<IUnitOfWork>())
        {
            var machineRepository = unitOfWork.GetRepository<Machine>();
            // GET ALL
            List<Machine> list = machineRepository.FindAll().ToList<Machine>();
            VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET", "MachineIconC.xaml",
                false, true, LicenseType.Licensed, "10.0.97.3", "10.0.97.3", 0);
            var machine = Machines.GetMachineById(unitOfWork, 3);
            Assert.AreSame(list[2], machine); // FAIL !!!!
        }
    }

    If I run other (complex) unit tests I get, as expected: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext."
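    One way to confirm the symptom (a diagnostic sketch, not a fix - it assumes object tracking is enabled on the context): LINQ to SQL's Table<TEntity>.GetOriginalEntityState returns null for entities the context is not tracking, so it can show whether the compiled query's result really belongs to the context that was passed in.

    var db = unitOfWork.DataContext;
    var machine = Machines.QueryMachineById(db, 3);

    // non-null only if 'machine' lives in this context's identity map
    var original = db.GetTable<Machine>().GetOriginalEntityState(machine);
    Console.WriteLine(original != null
        ? "tracked by the submitted DataContext"
        : "owned by some other (likely an earlier) DataContext");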

    Read the article

  • Telnet connection using c#

    - by alejandrobog
    Our office currently uses telnet to query an external server. The procedure is something like this:

    Connect - telnet open 128........ 25000
    Query - we paste the query and then hit alt + 019
    Response - we receive the response as text in the telnet window

    So I'm trying to make these queries automatic using a C# app. My code is the following. First the connection (no exceptions):

    SocketClient = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    String szIPSelected = txtIPAddress.Text;
    String szPort = txtPort.Text;
    int alPort = System.Convert.ToInt16(szPort, 10);
    System.Net.IPAddress remoteIPAddress = System.Net.IPAddress.Parse(szIPSelected);
    System.Net.IPEndPoint remoteEndPoint = new System.Net.IPEndPoint(remoteIPAddress, alPort);
    SocketClient.Connect(remoteEndPoint);

    Then I send the query (no exceptions):

    string data = "some query";
    byte[] byData = System.Text.Encoding.ASCII.GetBytes(data);
    SocketClient.Send(byData);

    Then I try to receive the response:

    byte[] buffer = new byte[10];
    Receive(SocketClient, buffer, 0, buffer.Length, 10000);
    string str = Encoding.ASCII.GetString(buffer, 0, buffer.Length);
    txtDataRx.Text = str;

    public static void Receive(Socket socket, byte[] buffer, int offset, int size, int timeout)
    {
        int startTickCount = Environment.TickCount;
        int received = 0; // how many bytes have already been received
        do
        {
            if (Environment.TickCount > startTickCount + timeout)
                throw new Exception("Timeout.");
            try
            {
                received += socket.Receive(buffer, offset + received, size - received, SocketFlags.None);
            }
            catch (SocketException ex)
            {
                if (ex.SocketErrorCode == SocketError.WouldBlock ||
                    ex.SocketErrorCode == SocketError.IOPending ||
                    ex.SocketErrorCode == SocketError.NoBufferSpaceAvailable)
                {
                    // socket buffer is probably empty, wait and try again
                    Thread.Sleep(30);
                }
                else
                    throw; // a serious error occurred
            }
        } while (received < size);
    }

    Every time I try to receive the response I get "an existing connection was forcibly closed by the remote host". If I open telnet and send the same query, I get a response right away. Any ideas or suggestions?
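    A sketch of an alternative receive loop, with assumptions flagged inline: the terminator byte 19 mirrors the manual alt + 019 keystroke, and the server is presumed to stop sending (or close the connection) when the response is complete - neither is confirmed by the original post. It also avoids insisting on an exact byte count in a fixed 10-byte buffer:

    // send the query followed by ASCII 19, matching the manual alt + 019 step (assumption)
    byte[] byData = Encoding.ASCII.GetBytes("some query" + (char)19);
    SocketClient.Send(byData);

    // read whatever arrives until the peer closes or nothing shows up for 10 s
    SocketClient.ReceiveTimeout = 10000; // Receive() throws SocketError.TimedOut after this
    var response = new System.IO.MemoryStream();
    var chunk = new byte[4096];
    try
    {
        int n;
        while ((n = SocketClient.Receive(chunk)) > 0)
            response.Write(chunk, 0, n); // accumulate until the remote end closes
    }
    catch (SocketException ex) when (ex.SocketErrorCode == SocketError.TimedOut)
    {
        // quiet line: treat whatever arrived so far as the full response
    }
    txtDataRx.Text = Encoding.ASCII.GetString(response.ToArray());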

    Read the article

  • Loading XML from Web Service

    - by Lukasz
    I am connecting to a web service to get some data back out as XML. The connection works fine and it returns the XML data from the service.

    var remoteURL = EveApiUrl;
    var postData = string.Format("userID={0}&apikey={1}&characterID={2}", UserId, ApiKey, CharacterId);
    var request = (HttpWebRequest)WebRequest.Create(remoteURL);
    request.Method = "POST";
    request.ContentLength = postData.Length;
    request.ContentType = "application/x-www-form-urlencoded";

    // Set up a stream to write the HTTP "POST" data
    var WebEncoding = new ASCIIEncoding();
    var byte1 = WebEncoding.GetBytes(postData);
    var newStream = request.GetRequestStream();
    newStream.Write(byte1, 0, byte1.Length);
    newStream.Close();

    var response = (HttpWebResponse)request.GetResponse();
    var receiveStream = response.GetResponseStream();
    var readStream = new StreamReader(receiveStream, Encoding.UTF8);
    var webdata = readStream.ReadToEnd();
    Console.WriteLine(webdata);

    This prints out the XML that comes from the service. I can also save the XML as an XML file like so:

    TextWriter writer = new StreamWriter(@"C:\Projects\TrainingSkills.xml");
    writer.WriteLine(webdata);
    writer.Close();

    Now I can load the file as an XDocument to perform queries on it, like this:

    var data = XDocument.Load(@"C:\Projects\TrainingSkills.xml");

    My problem is that I don't want to save the file and then load it back again. When I try to load directly from the stream I get an exception: "Illegal characters in path". I don't know what is going on - if I can load the same XML as a text file, why can't I load it as a stream? The XML is like this:

    <?xml version='1.0' encoding='UTF-8'?>
    <eveapi version="2">
      <currentTime>2010-04-28 17:58:27</currentTime>
      <result>
        <currentTQTime offset="1">2010-04-28 17:58:28</currentTQTime>
        <trainingEndTime>2010-04-29 02:48:59</trainingEndTime>
        <trainingStartTime>2010-04-28 00:56:42</trainingStartTime>
        <trainingTypeID>3386</trainingTypeID>
        <trainingStartSP>8000</trainingStartSP>
        <trainingDestinationSP>45255</trainingDestinationSP>
        <trainingToLevel>4</trainingToLevel>
        <skillInTraining>1</skillInTraining>
      </result>
      <cachedUntil>2010-04-28 18:58:27</cachedUntil>
    </eveapi>

    Thanks for your help!
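    The "Illegal characters in path" exception is the giveaway: XDocument.Load(string) treats its string argument as a file path or URI, so passing the XML text itself fails. Parsing the string, or loading from the reader, avoids the temp file entirely - a minimal sketch (the two lines are alternatives, since ReadToEnd has already consumed the reader in the original code):

    // parse the XML text that was already read from the response
    var data = XDocument.Parse(webdata);

    // or load straight from the reader without materializing the string first
    var data2 = XDocument.Load(readStream); // TextReader overload - no file path involved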

    Read the article

< Previous Page | 174 175 176 177 178 179 180 181 182 183 184 185  | Next Page >