Search Results

Search found 17227 results on 690 pages for 'oracle hcm cloud'.


  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications: 2 vCPUs, 4G RAM, 160GB disk space, 400Mb/s network, system image Ubuntu 12.04 LTS. I am only running Magento CE 1.7.0.2 on this server. Nothing else. Usually the server has a loading time of 4-5 seconds. Recently this has climbed to over 30 seconds, and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms. Running the top command and sorting the output returns the following:
    top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91
    Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie
    Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    31901 www-data 20 0 360m 71m 5840 R 7 1.8 1:39.51 apache2
    32084 www-data 20 0 362m 72m 5548 R 7 1.8 1:31.56 apache2
    32089 www-data 20 0 348m 59m 5660 R 7 1.5 1:41.74 apache2
    32295 www-data 20 0 343m 54m 5532 R 7 1.4 2:00.78 apache2
    32303 www-data 20 0 354m 65m 5260 R 7 1.6 1:38.76 apache2
    32304 www-data 20 0 346m 56m 5544 R 7 1.4 1:41.26 apache2
    32305 www-data 20 0 348m 59m 5640 R 7 1.5 1:50.11 apache2
    32291 www-data 20 0 358m 69m 5256 R 6 1.7 1:44.26 apache2
    32517 www-data 20 0 345m 56m 5532 R 6 1.4 1:45.56 apache2
    30473 www-data 20 0 355m 66m 5680 R 6 1.7 2:00.05 apache2
    32093 www-data 20 0 352m 63m 5848 R 6 1.6 1:53.23 apache2
    32302 www-data 20 0 345m 56m 5512 R 6 1.4 1:55.87 apache2
    32433 www-data 20 0 346m 57m 5500 S 6 1.4 1:31.58 apache2
    32638 www-data 20 0 354m 65m 5508 R 6 1.6 1:36.59 apache2
    32230 www-data 20 0 347m 57m 5524 R 6 1.4 1:33.96 apache2
    32231 www-data 20 0 355m 66m 5512 R 6 1.7 1:37.47 apache2
    32233 www-data 20 0 354m 64m 6032 R 6 1.6 1:59.74 apache2
    32300 www-data 20 0 355m 66m 5672 R 6 1.7 1:43.76 apache2
    32510 www-data 20 0 347m 58m 5512 R 6 1.5 1:42.54 apache2
    32521 www-data 20 0 348m 59m 5508 R 6 1.5 1:47.99 apache2
    32639 www-data 20 0 344m 55m 5512 R 6 1.4 1:34.25 apache2
    32083 www-data 20 0 345m 56m 5696 R 5 1.4 1:59.42 apache2
    32085 www-data 20 0 347m 58m 5692 R 5 1.5 1:42.29 apache2
    32293 www-data 20 0 353m 64m 5676 R 5 1.6 1:52.73 apache2
    32301 www-data 20 0 348m 59m 5564 R 5 1.5 1:49.63 apache2
    32528 www-data 20 0 351m 62m 5520 R 5 1.6 1:36.11 apache2
    31523 mysql 20 0 3460m 576m 8288 S 5 14.4 2:06.91 mysqld
    32002 www-data 20 0 345m 55m 5512 R 5 1.4 2:01.88 apache2
    32080 www-data 20 0 357m 68m 5512 S 5 1.7 1:31.30 apache2
    32163 www-data 20 0 347m 58m 5512 S 5 1.5 1:58.68 apache2
    32509 www-data 20 0 345m 56m 5504 R 5 1.4 1:49.54 apache2
    32306 www-data 20 0 358m 68m 5504 S 4 1.7 1:53.29 apache2
    32165 www-data 20 0 344m 55m 5524 S 4 1.4 1:40.71 apache2
    32640 www-data 20 0 345m 56m 5528 R 4 1.4 1:36.49 apache2
    31888 www-data 20 0 359m 70m 5664 R 4 1.8 1:57.07 apache2
    32511 www-data 20 0 357m 67m 5512 S 3 1.7 1:47.00 apache2
    32054 www-data 20 0 357m 68m 5660 S 2 1.7 1:53.10 apache2
    1 root 20 0 24452 2276 1232 S 0 0.1 0:01.58 init
    Moreover, running free -m returns the following:
    total used free shared buffers cached
    Mem: 4003 3919 83 0 118 901
    -/+ buffers/cache: 2899 1103
    Swap: 0 0 0
    To investigate this further, I installed Apache Buddy; it recommended that I reduce the MaxClients setting, which I did. I also installed MySQLTuner and it suggests that I set my innodb_buffer_pool_size to >= 3.0G. However, I cannot do that, since the whole memory is 4G.
    Here is the output from Apache Buddy:
    ### GENERAL REPORT ###
    Settings considered for this report:
    Your server's physical RAM: 4003MB
    Apache's MaxClients directive: 40
    Apache MPM Model: prefork
    Largest Apache process (by memory): 73.77MB
    [ OK ] Your MaxClients setting is within an acceptable range.
    Max potential memory usage: 2950.8 MB
    Percentage of RAM allocated to Apache: 73.72 %
    And this is the output of MySQLTuner:
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M)
    [--] Reads / Writes: 45% / 55%
    [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads)
    [OK] Maximum possible memory usage: 2.5G (64% of installed RAM)
    [OK] Slow queries: 0% (0/675K)
    [OK] Highest usage of available connections: 26% (40/151)
    [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M
    [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads)
    [OK] Query cache efficiency: 92.5% (500K cached / 541K selects)
    [!!] Query cache prunes per day: 302886
    [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts)
    [!!] Joins performed without indexes: 12135
    [OK] Temporary tables created on disk: 25% (8K on disk / 32K total)
    [OK] Thread cache hit rate: 90% (1K created / 12K connections)
    [!!] Table cache hit rate: 17% (400 open / 2K opened)
    [OK] Open file limit used: 12% (123/1K)
    [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks)
    [!!] InnoDB buffer pool / data size: 2.0G/3.5G
    [OK] InnoDB log waits: 0
    -------- Recommendations -----------------------------------------------------
    General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Enable the slow query log to troubleshoot bad queries
    Adjust your join queries to always utilize indexes
    Increase table_cache gradually to avoid file descriptor limits
    Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
    Variables to adjust:
    query_cache_size (> 64M)
    join_buffer_size (> 128.0K, or always use indexes with joins)
    table_cache (> 400)
    innodb_buffer_pool_size (>= 3G)
    Last but not least, the server still has more than 60% of free disk space. Now, based on the above, I have a few questions: Are these numbers normal? Do they make sense? Do I need to upgrade the server? If I don't need to upgrade and my configuration is not correct, how do I optimize it?
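    For reference, here is a rough sketch of the kind of tuning the two reports point at, assuming stock Ubuntu file locations and Apache's prefork MPM; the numbers are illustrative starting points to be validated against this workload, not a prescription:
    # /etc/apache2/apache2.conf (prefork section)
    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients           30      # ~30 x 74MB worst case stays around 2.2GB
        MaxRequestsPerChild 500      # recycle children so per-process memory cannot creep
    </IfModule>

    # /etc/mysql/my.cnf ([mysqld] section)
    innodb_buffer_pool_size = 2G     # leave headroom for Apache and the OS on a 4GB box
    query_cache_size        = 128M   # the tuner flags heavy query cache pruning at 64M
    table_open_cache        = 800    # called table_cache on older MySQL versions
    join_buffer_size        = 256K
    A PHP opcode cache (e.g. APC) and Magento's own caching typically matter at least as much as these settings on a box this size, given the CPU-bound apache2 processes in the top output.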

    Read the article

  • Is it worth moving from stored procedures to LINQ?

    - by Josef
    I'm looking at standardizing programming in an organisation. Half uses stored procedures and the other half LINQ. From what I've read there is still some debate going on about this topic. My concern is that MS is trying to slip in its own proprietary query language, LINQ, to make SQL redundant. If, a few years back, Microsoft had tried to win customers from Oracle and Sybase with their MSSQL database and stated that it didn't use SQL but their own proprietary query language, i.e. LINQ, I doubt many would have switched. I believe that is exactly what is happening now by introducing it into the application's business layer. I have used MS for many years, but one gripe I have with them is that they change direction a lot. By a lot I mean new releases of .NET, Silverlight, etc. are more than 30% different from the previous version, so by the time you become productive a new release is on the way. As things stand now, a web developer using .NET would need to know either VB.NET or C#, plus XML, XAML, JavaScript, HTML, SQL and now LINQ. That doesn't make for good productivity in my books. My concern is that once we all start using LINQ, MS will start changing it between releases and it will become an ever-changing landscape. I believe that LINQ to SQL has already been deprecated. At least with SQL we are dealing with a more stable and standardized language. Are we looking at a programming revolution or a marketing campaign? As far as I know, other languages like COBOL have stayed the same for years; a COBOL programmer from 20 years ago could pick up today's code and start working on it. Could a VB3 person work on a modern .NET web app? Would these large changes be needed if the underlying original foundation had been sound? I worry about following MS's shaky roadmap with its dead ends and double-backs. Are there any architects out there who feel the same? Regards, Josef

    Read the article

  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that: All exceptions thrown by Hibernate are fatal. This means you have to roll back the database transaction and close the current Session. You aren't allowed to continue working with a Session that threw an exception. One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each record update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then for the next record a new transaction is opened, etc. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i btw with Hibernate 3.24.sp1 on JBoss 4.2. Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it is now requesting a new session for each record update. So far, so good. However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test btw, or ...?). We thought of shutting down the listener of the DB, but we realized that the app is keeping a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried to kill some of those connections in the DB while the app was processing updates - this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session. So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know: Under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown? How can I reproduce this in a test? (What's the proper term for such a test?)
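    For reference, a minimal sketch of the session-per-record shape described above (class and method names are invented for illustration; the point is that a Session that has thrown is closed and never reused):
    import org.hibernate.HibernateException;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    public class RecordImporter {
        private final SessionFactory sessionFactory;

        public RecordImporter(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        /** Imports one record in its own session and transaction; returns true on success. */
        public boolean importRecord(Object record) {
            Session session = sessionFactory.openSession();
            Transaction tx = null;
            try {
                tx = session.beginTransaction();
                session.saveOrUpdate(record);
                tx.commit();
                return true;
            } catch (HibernateException e) {
                if (tx != null) {
                    tx.rollback();
                }
                return false;        // caller logs the failure and moves on to the next record
            } finally {
                session.close();     // a session that has thrown must never be reused
            }
        }
    }
    As for the terminology question, deliberately killing connections or the listener mid-run is usually described as fault-injection (or resilience) testing rather than a stress test.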

    Read the article

  • Build path issues learning Guice

    - by Preston
    I can't figure out why I'm getting the error below. I have included all the appropriate jars as far as I can tell (I have included Eclipse's .classpath file below). All of the classpath entries resolve just fine. What am I missing? The type javax.servlet.ServletContextListener cannot be resolved. It is indirectly referenced from required .class files on the "extends GuiceServletContextListener" line - import com.google.inject.Guice; import com.google.inject.Injector; import com.google.inject.servlet.GuiceServletContextListener; import com.google.inject.servlet.ServletModule; public class ServletConfig extends GuiceServletContextListener { @Override protected Injector getInjector() { return Guice.createInjector(new ServletModule(){ @Override protected void configureServlets() { // TODO: add necessary code to bind } }); } } .classpath: <?xml version="1.0" encoding="UTF-8"?> <classpath> <classpathentry kind="src" path="src"/> <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.7.0_21"> <attributes> <attribute name="owner.project.facets" value="java"/> </attributes> </classpathentry> <classpathentry kind="con" path="oracle.eclipse.tools.glassfish.lib.system"> <attributes> <attribute name="owner.project.facets" value="jst.web"/> </attributes> </classpathentry> <classpathentry kind="con" path="org.eclipse.jst.j2ee.internal.web.container"/> <classpathentry kind="con" path="org.eclipse.jst.j2ee.internal.module.container"/> <classpathentry kind="lib" path="guice-3.0/aopalliance.jar"/> <classpathentry kind="lib" path="guice-3.0/guice-3.0.jar"/> <classpathentry kind="lib" path="guice-3.0/guice-servlet-3.0.jar"/> <classpathentry kind="lib" path="guice-3.0/javax.inject.jar"/> <classpathentry kind="output" path="build/classes"/> </classpath>
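    For what it's worth, that particular compiler error usually means the Servlet API itself is missing from the compile classpath: GuiceServletContextListener implements javax.servlet.ServletContextListener, and none of the Guice jars ship that type. A hedged fix is to add a servlet-api jar to the build path (the path below is purely illustrative; the GlassFish runtime already provides this jar, so it should not be deployed with the app), or to attach the server runtime library to the project's build path so the container entry actually resolves:
    <classpathentry kind="lib" path="lib/servlet-api.jar"/>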

    Read the article

  • How do you stop scripters from slamming your website hundreds of times a second?

    - by davebug
    [update] I've accepted an answer, as lc deserves the bounty due to the well thought-out answer, but sadly, I believe we're stuck with our original worst case scenario: CAPTCHA everyone on purchase attempts of the crap. Short explanation: caching / web farms make it impossible for us to actually track hits, and any workaround (sending a non-cached web-beacon, writing to a unified table, etc.) slows the site down worse than the bots would. There is likely some pricey bit of hardware from Cisco or the like that can help at a high level, but it's hard to justify the cost if CAPTCHAing everyone is an alternative. I'll attempt to do a more full explanation in here later, as well as cleaning this up for future searchers (though others are welcome to try, as it's community wiki). I've added bounty to this question and attempted to explain why the current answers don't fit our needs. First, though, thanks to all of you who have thought about this, it's amazing to have this collective intelligence to help work through seemingly impossible problems. I'll be a little more clear than I was before: This is about the bag o' crap sales on woot.com. I'm the president of Woot Workshop, the subsidiary of Woot that does the design, writes the product descriptions, podcasts, blog posts, and moderates the forums. I work in the css/html world and am only barely familiar with the rest of the developer world. I work closely with the developers and have talked through all of the answers here (and many other ideas we've had). Usability of the site is a massive part of my job, and making the site exciting and fun is most of the rest of it. That's where the three goals below derive. CAPTCHA harms usability, and bots steal the fun and excitement out of our crap sales. To set up the scenario a little more, bots are slamming our front page tens of times a second screenscraping (and/or scanning our rss) for the Random Crap sale. The moment they see that, it triggers a second stage of the program that logs in, clicks I want One, fills out the form, and buys the crap. In current (2/6/2009) order of votes: lc: On stackoverflow and other sites that use this method, they're almost always dealing with authenticated (logged in) users, because the task being attempted requires that. On Woot, anonymous (non-logged) users can view our home page. In other words, the slamming bots can be non-authenticated (and essentially non-trackable except by IP address). So we're back to scanning for IPs, which a) is fairly useless in this age of cloud networking and spambot zombies and b) catches too many innocents given the number of businesses that come from one IP address (not to mention the issues with non-static IP ISPs and potential performance hits to trying to track this). Oh, and having people call us would be the worst possible scenario. Can we have them call you? BradC Ned Batchelder's methods look pretty cool, but they're pretty firmly designed to defeat bots built for a network of sites. Our problem is bots are built specifically to defeat our site. Some of these methods could likely work for a short time until the scripters evolved their bots to ignore the honeypot, screenscrape for nearby label names instead of form ids, and use a javascript-capable browser control. lc again "Unless, of course, the hype is part of you

    Read the article

  • NHibernate (3.1.0.4000) NullReferenceException using Query<> and NHibernate Facility

    - by TigerShark
    I have a problem with NHibernate that I can't seem to find any solution for. In my project I have a simple entity (Batch), but whenever I try and run the following test, I get an exception. I've tried a couple of different ways to perform a similar query, but I get an almost identical exception for all of them (it differs only in which LINQ method is being executed). The first test: [Test] public void QueryLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .FirstOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) The second test: [Test] public void QueryLatestBatch2() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .OrderBy(x => x.Executed) .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.SingleOrDefault(IQueryable`1 source) However, this one is passing (using QueryOver<>): [Test] public void QueryOverLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.QueryOver<Batch>() .OrderBy(x => x.Executed).Asc .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); Assert.That(batch.Executed, Is.LessThan(DateTime.Now)); } } Using the QueryOver<> API is not bad at all, but I'm just kind of baffled that the Query<> API isn't working, which is kind of sad, since the First() operation is very concise, and our developers really enjoy LINQ. I really hope there is a solution to this, as it seems strange if these methods are failing such a simple test. EDIT: I'm using Oracle 11g, my mappings are done with FluentNHibernate registered through Castle Windsor with the NHibernate Facility. As I wrote, the odd thing is that the query works perfectly with the QueryOver<> API, but not through LINQ.

    Read the article

  • What are the benefits of using ORM over XML Serialization/Deserialization?

    - by Tequila Jinx
    I've been reading about NHibernate and Microsoft's Entity Framework to perform Object Relational Mapping against my data access layer. I'm interested in the benefits of having an established framework to perform ORM, but I'm curious as to the performance costs of using it against standard XML Serialization and Deserialization. Right now, I develop stored procedures in Oracle and SQL Server that use XML Types for either input or output parameters and return or shred XML depending on need. I use a custom database command object that uses generics to deserialize the XML results into a specified serializable class. By using a combination of generics, xml (de)serialization and Microsoft's DAAB, I've got a process that's fairly simple to develop against regardless of the data source. Moreover, since I exclusively use Stored Procedures to perform database operations, I'm mostly protected from changes in the data structure. Here's an over-simplified example of what I've been doing. static void main() { testXmlClass test = new test(1); test.Name = "Foo"; test.Save(); } // Example Serializable Class ------------------------------------------------ [XmlRootAttribute("test")] class testXmlClass() { [XmlElement(Name="id")] public int ID {set; get;} [XmlElement(Name="name")] public string Name {set; get;} //create an instance of the class loaded with data. public testXmlClass(int id) { GenericDBProvider db = new GenericDBProvider(); this = db.ExecuteSerializable("myGetByIDProcedure"); } //save the class to the database... public Save() { GenericDBProvider db = new GenericDBProvider(); db.AddInParameter("myInputParameter", DbType.XML, this); db.ExecuteSerializableNonQuery("mySaveProcedure"); } } // Database Handler ---------------------------------------------------------- class GenericDBProvider { public T ExecuteSerializable<T>(string commandText) where T : class { XmlSerializer xml = new XmlSerializer(typeof(T)); // connection and command code is assumed for the purposes of this example. // the final results basically just come down to... return xml.Deserialize(commandResults) as T; } public void ExecuteSerializableNonQuery(string commandText) { // once again, connection and command code is assumed... // basically, just execute the command along with the specified // parameters which have been serialized. } public void AddInParameter(string name, DbType type, object value) { StringWriter w = new StringWriter(); XmlSerializer x = new XmlSerializer(value.GetType()); //handle serialization for serializable classes. if (type == DbType.Xml && (value.GetType() != typeof(System.String))) { x.Serialize(w, value); w.Close(); // store serialized object in a DbParameterCollection accessible // to my other methods. } else { //handle all other parameter types } } } I'm starting a new project which will rely heavily on database operations. I'm very curious to know whether my current practices will be sustainable in a high-traffic situation and whether or not I should consider switching to NHibernate or Microsoft's Entity Framework to perform what essentially seems to boil down to the same thing I'm currently doing. I appreciate any advice you may have.

    Read the article

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.) First, I think it is important to enumerate the reasons systems/databases use SSNs: (note—these are reasons for de facto current state—I understand that many of them are not good reasons) Required for Interaction with External Entities. This is the most valid case—where external entities your system interfaces with require an SSN. This would typically be government, tax and financial. SSN is used to ensure system-wide uniqueness. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins. SSN is used for user authentication (e.g., log-on) The enterprise solution that seems optimum to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible—all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.) This approach would solve issues 2 and 3, without ever requiring lookups to get the real SSN. For issue #1, authorized systems could provide an ASN, and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue #4 could be handled the same way as issue #1, though obviously the best thing would be to move away from having users supply an SSN for log-on. There are a couple of papers on this: UC Berkeley: http://bit.ly/bdZPjQ Oracle Vault: bit.ly/cikbi1
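    To make the repository idea concrete, a hedged Oracle-flavoured sketch of the vault's core (table and column names invented; encryption, auditing and access control omitted): every downstream system stores only the ASN, and only this table can map it back to the real SSN:
    CREATE TABLE ssn_vault (
        asn        CHAR(9)      NOT NULL PRIMARY KEY,  -- random, globally unique surrogate
        ssn        VARCHAR2(9)  NOT NULL UNIQUE,       -- encrypted at rest in a real deployment
        created_at DATE         DEFAULT SYSDATE NOT NULL
    );

    -- authorized lookup path: return only the last four digits unless the full SSN is required
    SELECT SUBSTR(ssn, 6, 4) AS ssn_last4
      FROM ssn_vault
     WHERE asn = :asn;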

    Read the article

  • SSH not working over IPSec tunnel (Strongswan)

    - by PattPatel
    I configured a small network on a cloud virtual machine. This virtual machine has a static IP address assigned to eth0 interface that I'll call $EXTIP. mydomain.com points to $EXTIP. Inside, I have some linux containers, that get their ip through DHCP in the Subnet 10.0.0.0/24 (i called the virtual interface nat ). They run some services that can be reached through DNAT. Then I wanted to connect to these containers through an IPSec tunnel, so I configured StrongSwan. ipsec.conf: conn %default dpdaction=none rekey=no conn remote keyexchange=ikev2 ike=######## left=[$EXTIP] leftsubnet=10.0.1.0/24,10.0.0.0/24 leftauth=pubkey lefthostaccess=yes leftcert=########.pem leftfirewall=yes leftid="#########" right=%any rightsourceip=10.0.1.0/24 rightauth=######## rightid=%any rightsendcert=never eap_identity=%any auto=add type=tunnel Everything works fine, IPSec clients get IPs of the 10.0.1.0/24 subnet and can reach the containers subnet. My problem is that I'm not able to get SSH connections over the tunnel. It simply does not work, ssh client does not produce any output. Sniffing with tcpdump gives: tcpdump: 09:50:29.648206 ARP, Request who-has 10.0.0.1 tell mydomain.com, length 28 09:50:29.648246 ARP, Reply 10.0.0.1 is-at 00:ff:aa:00:00:01 (oui Unknown), length 28 09:50:29.648253 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [S], seq 4007849772, win 29200, options [mss 1460,sackOK,TS val 1151153 ecr 0,nop,wscale 7], length 0 09:50:29.648296 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [S.], seq 2809522632, ack 4007849773, win 14480, options [mss 1460,sackOK,TS val 11482992 ecr 1151153,nop,wscale 6], length 0 09:50:29.677225 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 2809522633, win 229, options [nop,nop,TS val 1151162 ecr 11482992], length 0 09:50:29.679370 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [P.], seq 0:23, ack 1, win 229, options [nop,nop,TS val 1151162 ecr 11482992], length 23 09:50:29.679403 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], ack 24, win 227, options [nop,nop,TS val 11483002 ecr 1151162], length 0 09:50:29.684337 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [P.], seq 1:32, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 31 09:50:29.685471 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], seq 32:1480, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 1448 09:50:29.685519 IP mydomain.com > 10.0.0.1: ICMP mydomain.com unreachable - need to frag (mtu 1422), length 556 09:50:29.685567 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], seq 32:1402, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 1370 09:50:29.685572 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], seq 1402:1480, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 78 09:50:29.714601 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 32, win 229, options [nop,nop,TS val 1151173 ecr 11483003], length 0 09:50:29.714642 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [P.], seq 1480:1600, ack 24, win 227, options [nop,nop,TS val 11483012 ecr 1151173], length 120 09:50:29.723649 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [P.], seq 1393:1959, ack 32, win 229, options [nop,nop,TS val 1151174 ecr 11483003], length 566 09:50:29.723677 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], ack 24, win 227, options [nop,nop,TS val 11483015 ecr 1151173,nop,nop,sack 1 {1394:1960}], length 0 09:50:29.725688 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 1480, win 251, options [nop,nop,TS val 1151177 ecr 11483003], length 0 09:50:29.952394 IP 10.0.0.1.ssh > 
10.0.1.2.54869: Flags [P.], seq 1480:1600, ack 24, win 227, options [nop,nop,TS val 11483084 ecr 1151173,nop,nop,sack 1 {1394:1960}], length 120 09:50:29.981056 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 1600, win 251, options [nop,nop,TS val 1151253 ecr 11483084,nop,nop,sack 1 {1480:1600}], length 0 If you need it this is my iptables configuration file: iptables: *filter :INPUT ACCEPT [144:9669] :FORWARD DROP [0:0] :OUTPUT ACCEPT [97:15649] :interfacce-trusted - [0:0] :porte-trusted - [0:0] -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -j interfacce-trusted -A FORWARD -j porte-trusted -A FORWARD -j REJECT --reject-with icmp-host-unreachable -A FORWARD -d 10.0.0.1/32 -p tcp -m tcp --dport 80 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A FORWARD -d 10.0.0.1/32 -p tcp -m tcp --dport 443 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A FORWARD -d 10.0.0.3/32 -p tcp -m tcp --dport 1234 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A interfacce-trusted -i nat -j ACCEPT -A porte-trusted -d 10.0.0.1/32 -p tcp -m tcp --dport 80 -j ACCEPT -A porte-trusted -d 10.0.0.1/32 -p tcp -m tcp --dport 443 -j ACCEPT -A porte-trusted -d 10.0.0.3/32 -p tcp -m tcp --dport 1234 -j ACCEPT COMMIT *nat :PREROUTING ACCEPT [10:600] :INPUT ACCEPT [10:600] :OUTPUT ACCEPT [4:268] :POSTROUTING ACCEPT [18:1108] -A PREROUTING -d [$EXTIP] -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80 -A PREROUTING -d [$EXTIP] -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.0.0.1:443 -A PREROUTING -d [$EXTIP] -p tcp -m tcp --dport 8069 -j DNAT --to-destination 10.0.0.3:1234 -A POSTROUTING -s 10.0.0.0/24 -o eth0 -m policy --dir out --pol ipsec -j ACCEPT -A POSTROUTING -s 10.0.1.0/24 -o nat -j MASQUERADE -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE COMMIT Probably I'm missing something stupid... Thanks in advance for helping :))
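    The ICMP "need to frag (mtu 1422)" line in that capture points at the classic path-MTU problem over an IPsec tunnel: the interactive part of the SSH handshake fits, but the first large segment never gets through. A hedged first thing to try (exact chain placement depends on the rest of the ruleset) is clamping the MSS of forwarded connections so TCP never emits segments bigger than the tunnel can carry:
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
    # or pin it explicitly below the tunnel MTU, e.g.:
    # iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1360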

    Read the article

  • Complex relationship between tables in NHibernate

    - by Ilya Kogan
    Hi all, I'm writing a Fluent NHibernate mapping for a legacy Oracle database. The challenge is that the tables have composite primary keys. If I were at total freedom, I would redesign the relationships and auto-generate primary keys, but other applications must write to the same database and read from it, so I cannot do it. These are the two tables I'll focus on: Example data Trips table: 1, 10:00, 11:00 ... 1, 12:00, 15:00 ... 1, 16:00, 19:00 ... 2, 12:00, 13:00 ... 3, 9:00, 18:00 ... Faults table: 1, 13:00 ... 1, 23:00 ... 2, 12:30 ... In this case, vehicle 1 made three trips and has two faults. The first fault happened during the second trip, and the second fault happened while the vehicle was resting. Vehicle 2 had one trip, during which a fault happened. Constraints Trips of the same vehicle never overlap. So the tables have an optional one-to-many relationship, because every fault either happens during a trip or it doesn't. If I wanted to join them in SQL, I would write: select ... from Faults left outer join Trips on Faults.VehicleId = Trips.VehicleId and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime and then I'd get a dataset where every fault appears exactly once (one-to-many as I said). Note that there is no Vehicles table, and I don't need one. But I did create a view that contains all VehicleIds from both tables, so I can use it as a junction table. What am I actually looking for? The tables are huge because they cover years of data, and every time I only need to fetch a range of a few hours. So I need a mapping and a criteria that will run something like the following SQL underneath: select ... from Faults left outer join Trips on Faults.VehicleId = Trips.VehicleId and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime where Faults.FaultTime between :p0 and :p1 Do you have any ideas how to achieve it? Note 1: Currently the application shouldn't write to the database, so persistence is not a must, although if the mapping supports persistence, it may help at some point in the future. Note 2: I know it's a tough one, so if you give me a great answer, you will be properly rewarded :) Thank you for reading this long question, and now I only hope for the best :)

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id There are multiple users (distinct usr_id's) time_stamp is not a unique identifier: sometimes user events (one by row in the table) will occur with the same time_stamp. trans_id is unique only for very small time ranges: over time it repeats remaining_lives (for a given user) can both increase and decrease over time example: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 07:00 | 1 | 1 | 1 09:00 | 4 | 2 | 2 10:00 | 2 | 3 | 3 10:00 | 1 | 2 | 4 11:00 | 4 | 1 | 5 11:00 | 3 | 1 | 6 13:00 | 3 | 3 | 1 As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 11:00 | 3 | 1 | 6 10:00 | 1 | 2 | 4 13:00 | 3 | 3 | 1 As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp) AS max_timestamp FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp = b.time_stamp Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id ORDER BY b.usr_id Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case: create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer); insert into lives values ('2000-01-01 07:00', 1, 1, 1); insert into lives values ('2000-01-01 09:00', 4, 2, 2); insert into lives values ('2000-01-01 10:00', 2, 3, 3); insert into lives values ('2000-01-01 10:00', 1, 2, 4); insert into lives values ('2000-01-01 11:00', 4, 1, 5); insert into lives values ('2000-01-01 11:00', 3, 1, 6); insert into lives values ('2000-01-01 13:00', 3, 3, 1);
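    For reference, the canonical PostgreSQL shortcut for this greatest-row-per-group problem is DISTINCT ON, which keeps exactly one row per usr_id according to the ORDER BY and avoids the string-concatenation self-join (window functions such as row_number() over a partition are the other common route); both are sketches to be checked against the real schema:
    SELECT DISTINCT ON (usr_id)
           time_stamp, lives_remaining, usr_id, trans_id
      FROM lives
     ORDER BY usr_id, time_stamp DESC, trans_id DESC;

    -- an index matching that ordering helps on tens of millions of rows
    CREATE INDEX lives_usr_ts_trans_idx ON lives (usr_id, time_stamp DESC, trans_id DESC);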

    Read the article

  • JPA Entity Manager resource handling

    - by chiragshahkapadia
    Every time I call JPA method its creating entity and binding query. My persistence properties are: <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/> <property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.SingletonEhCacheProvider"/> <property name="hibernate.cache.use_second_level_cache" value="true"/> <property name="hibernate.cache.use_query_cache" value="true"/> And I am creating entity manager the way shown below: emf = Persistence.createEntityManagerFactory("pu"); em = emf.createEntityManager(); em = Persistence.createEntityManagerFactory("pu").createEntityManager(); Is there any nice way to manage entity manager resource instead create new every time or any property can set in persistence. Remember it's JPA. See below binding log every time : 15:35:15,527 INFO [AnnotationBinder] Binding entity from annotated class: * 15:35:15,527 INFO [QueryBinder] Binding Named query: * = * 15:35:15,527 INFO [QueryBinder] Binding Named query: * = * 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [EntityBinder] Bind entity com.* on table * 15:35:15,542 INFO [HibernateSearchEventListenerRegister] Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled. 15:35:15,542 INFO [NamingHelper] JNDI InitialContext properties:{} 15:35:15,542 INFO [DatasourceConnectionProvider] Using datasource: 15:35:15,542 INFO [SettingsFactory] RDBMS: and Real Application Testing options 15:35:15,542 INFO [SettingsFactory] JDBC driver: Oracle JDBC driver, version: 9.2.0.1.0 15:35:15,542 INFO [Dialect] Using dialect: org.hibernate.dialect.Oracle10gDialect 15:35:15,542 INFO [TransactionFactoryFactory] Transaction strategy: org.hibernate.transaction.JDBCTransactionFactory 15:35:15,542 INFO [TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recomm ended) 15:35:15,542 INFO [SettingsFactory] Automatic flush during beforeCompletion(): disabled 15:35:15,542 INFO [SettingsFactory] Automatic session close at end of transaction: disabled 15:35:15,542 INFO [SettingsFactory] JDBC batch size: 15 15:35:15,542 INFO [SettingsFactory] JDBC batch updates for versioned data: disabled 15:35:15,542 INFO [SettingsFactory] Scrollable result sets: enabled 15:35:15,542 INFO [SettingsFactory] JDBC3 getGeneratedKeys(): disabled 15:35:15,542 INFO [SettingsFactory] Connection release mode: auto 15:35:15,542 INFO [SettingsFactory] Default batch fetch size: 1 15:35:15,542 INFO [SettingsFactory] Generate SQL with comments: disabled 15:35:15,542 INFO [SettingsFactory] Order SQL updates by primary key: disabled 15:35:15,542 INFO [SettingsFactory] Order SQL inserts for batching: disabled 15:35:15,542 INFO [SettingsFactory] Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 15:35:15,542 INFO [ASTQueryTranslatorFactory] Using ASTQueryTranslatorFactory 15:35:15,542 INFO [SettingsFactory] Query language substitutions: {} 15:35:15,542 INFO [SettingsFactory] JPA-QL strict compliance: enabled 15:35:15,542 INFO [SettingsFactory] Second-level cache: enabled 15:35:15,542 INFO [SettingsFactory] 
Query cache: enabled 15:35:15,542 INFO [SettingsFactory] Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge 15:35:15,542 INFO [RegionFactoryCacheProviderBridge] Cache provider: net.sf.ehcache.hibernate.SingletonEhCacheProvider 15:35:15,542 INFO [SettingsFactory] Optimize cache for minimal puts: disabled 15:35:15,542 INFO [SettingsFactory] Structured second-level cache entries: disabled 15:35:15,542 INFO [SettingsFactory] Query cache factory: org.hibernate.cache.StandardQueryCacheFactory 15:35:15,542 INFO [SettingsFactory] Statistics: disabled 15:35:15,542 INFO [SettingsFactory] Deleted entity synthetic identifier rollback: disabled 15:35:15,542 INFO [SettingsFactory] Default entity-mode: pojo 15:35:15,542 INFO [SettingsFactory] Named query checking : enabled 15:35:15,542 INFO [SessionFactoryImpl] building session factory 15:35:15,542 INFO [SessionFactoryObjectFactory] Not binding factory to JNDI, no JNDI name configured 15:35:15,542 INFO [UpdateTimestampsCache] starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache 15:35:15,542 INFO [StandardQueryCache] starting query cache at region: org.hibernate.cache.StandardQueryCache
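    The repeated binding log is the symptom of rebuilding the EntityManagerFactory on every call: the factory is the expensive, thread-safe object meant to be created once per persistence unit, while EntityManagers are cheap and short-lived. A minimal sketch of that split (class name invented for illustration):
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public final class JpaUtil {
        // built once, on first use; reused for the life of the application
        private static final EntityManagerFactory EMF =
                Persistence.createEntityManagerFactory("pu");

        private JpaUtil() { }

        public static EntityManager newEntityManager() {
            return EMF.createEntityManager();   // create per unit of work, close when done
        }

        public static void shutdown() {
            EMF.close();                        // call once, at application shutdown
        }
    }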

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version of this just created a single table per entry schema, with each entry spanning a single column or multiple columns (for complex types) with the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed over several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity... I can only store a fixed number of columns. I need to create indexes on all columns. I need to recreate the table on schema changes. Some of my key design criteria are: Very fast querying (using a simple domain-specific query language) Writes don't have to be fast Many concurrent users Schemas will change often Schemas might contain many thousands of columns The data entries might be distributed and need synchronization. Preferably MySQL and SQLite - databases like DB2 and Oracle are out of the question. Using .Net/Mono I've been thinking of a couple of possible designs, but none of them seems like a good choice. Solution 1: Union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space. Solution 2: Key/value store. All values are stored as strings and converted when needed. Also uses a lot of space, and of course, I hate having to convert everything to string. Solution 3: Use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database as other parts of the application fit better with a relational model, and being able to join the data is helpful. I cannot help thinking that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research is doing something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course... I don't need 99% of the provided functionality, but a lot of stuff not included. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask such a question. Thanks in advance!
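    For what it's worth, a rough sketch of what the union-like Solution 1 usually looks like as a row-per-value table (names and sizes are illustrative only); it keeps values in their native type and indexable without one physical column per schema field:
    CREATE TABLE entry_value (
        entry_id    INTEGER       NOT NULL,
        field_id    INTEGER       NOT NULL,  -- points at the user-defined field definition
        int_value   INTEGER       NULL,
        num_value   DECIMAL(18,6) NULL,
        text_value  VARCHAR(255)  NULL,
        date_value  TIMESTAMP     NULL,
        PRIMARY KEY (entry_id, field_id)
    );
    CREATE INDEX entry_value_field_int  ON entry_value (field_id, int_value);
    CREATE INDEX entry_value_field_text ON entry_value (field_id, text_value);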

    Read the article

  • easiest and best way to make a server queue java

    - by houlahan
    i have a server at the moment which makes a new thread for every user connected but after about 6 people are on the server for more than 15 mins it tends to flop and give me java heap out of memory error i have 1 thread that checks with a mysql database every 30 seconds to see if any of the users currently logged on have any new messages. what would be the easiest way to implement a server queue? this is the my main method for my server: public class Server { public static int MaxUsers = 1000; //public static PrintStream[] sessions = new PrintStream[MaxUsers]; public static ObjectOutputStream[] sessions = new ObjectOutputStream[MaxUsers]; public static ObjectInputStream[] ois = new ObjectInputStream[MaxUsers]; private static int port = 6283; public static Connection conn; static Toolkit toolkit; static Timer timer; public static void main(String[] args) { try { conn = (Connection) Mysql.getConnection(); } catch (Exception ex) { Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex); } System.out.println("****************************************************"); System.out.println("* *"); System.out.println("* Cloud Server *"); System.out.println("* ©2010 *"); System.out.println("* *"); System.out.println("* Luke Houlahan *"); System.out.println("* *"); System.out.println("* Server Online *"); System.out.println("* Listening On Port " + port + " *"); System.out.println("* *"); System.out.println("****************************************************"); System.out.println(""); mailChecker(); try { int i; ServerSocket s = new ServerSocket(port); for (i = 0; i < MaxUsers; ++i) { sessions[i] = null; } while (true) { try { Socket incoming = s.accept(); boolean found = false; int numusers = 0; int usernum = -1; synchronized (sessions) { for (i = 0; i < MaxUsers; ++i) { if (sessions[i] == null) { if (!found) { sessions[i] = new ObjectOutputStream(incoming.getOutputStream()); ois[i]= new ObjectInputStream(incoming.getInputStream()); new SocketHandler(incoming, i).start(); found = true; usernum = i; } } else { numusers++; } } if (!found) { ObjectOutputStream temp = new ObjectOutputStream(incoming.getOutputStream()); Person tempperson = new Person(); tempperson.setFlagField(100); temp.writeObject(tempperson); temp.flush(); temp = null; tempperson = null; incoming.close(); } else { } } } catch (IOException ex) { System.out.println(1); Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex); } } } catch (IOException ex) { System.out.println(2); Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex); } } public static void mailChecker() { toolkit = Toolkit.getDefaultToolkit(); timer = new Timer(); timer.schedule(new mailCheck(), 0, 10 * 1000); } }
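    A commonly suggested shape for this (a hedged sketch: it reuses only the outline of the code above, and the SocketHandlerTask wrapper is invented) is to replace thread-per-connection with a fixed-size pool; the pool's internal queue then holds waiting connections instead of each one costing a full thread:
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PooledServer {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(50); // tune to available heap
            ServerSocket serverSocket = new ServerSocket(6283);
            while (true) {
                Socket incoming = serverSocket.accept();
                pool.execute(new SocketHandlerTask(incoming));       // queued if all workers are busy
            }
        }
    }

    // The existing per-user SocketHandler thread becomes a Runnable along these lines:
    class SocketHandlerTask implements Runnable {
        private final Socket socket;
        SocketHandlerTask(Socket socket) { this.socket = socket; }
        @Override public void run() {
            // talk to the client here, and close the socket in a finally block
        }
    }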

    Read the article

  • dbms_xmlschema fail to validate with complexType

    - by Andrew
    Preface: This works on one Oracle 11gR1 (Solaris 64) database and not on a second and we can't figure out the difference between the two databases. Somehow the complexType causes the validation to fail with this error: ORA-31154: invalid XML document ORA-19202: Error occurred in XML processing LSX-00200: element "shiporder" not empty ORA-06512: at "SYS.XMLTYPE", line 354 ORA-06512: at line 13 But the schema is valid (passes this online test: http://www.xmlme.com/Validator.aspx) -- Cleanup any existing schema begin dbms_xmlschema.deleteschema('shiporder.xsd',dbms_xmlschema.DELETE_CASCADE); end; -- Define the problem schema (adapted from http://www.w3schools.com/schema/schema_example.asp) begin dbms_xmlschema.registerSchema('shiporder.xsd','<?xml version="1.0" encoding="ISO-8859-1" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="shiporder"> <xs:complexType> <xs:sequence> <xs:element name="orderperson" type="xs:string"/> </xs:sequence> </xs:complexType> </xs:element> </xs:schema>',owner=>'SCOTT'); end; -- Attempt to validate declare bbb xmltype; begin bbb := XMLType('<?xml version="1.0" encoding="ISO-8859-1"?> <shiporder xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="shiporder.xsd"> <orderperson>John Smith</orderperson> </shiporder>'); XMLType.schemaValidate(bbb); end; Now if I gut the schema definition and leave only a string in the XML then the validation passes: begin dbms_xmlschema.deleteschema('shiporder.xsd',dbms_xmlschema.DELETE_CASCADE); end; begin dbms_xmlschema.registerSchema('shiporder.xsd','<?xml version="1.0" encoding="ISO-8859-1" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="shiporder" type="xs:string"/> </xs:schema>',owner=>'SCOTT'); end; DECLARE xml XMLTYPE; BEGIN xml := XMLTYPE('<?xml version="1.0" encoding="ISO-8859-1"?> <shiporder xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="shiporder.xsd"> John Smith </shiporder>'); XMLTYPE.schemaValidate(xml); END;

    Read the article

  • Internet Explorer buggy when accessing a custom weblogic provider

    - by James
    I've created a custom Weblogic Security Authentication Provider on version 10.3 that includes a custom login module to validate users. As part of the provider, I've implemented the ServletAuthenticationFilter and added one filter. The filter acts as a common log on page for all the applications within the domain. When we access any secured URLs by entering them in the address bar, this works fine in IE and Firefox. But when we bookmark the link in IE an odd thing happens. If I click the bookmark, you will see our log on page, then after you've successfully logged into the system the basic auth page will display, even though the user is already authenticated. This never happens in Firefox, only IE. It's also intermittent. 1 time out of 5 IE will correctly redirect and not show the basic auth window. Firefox and Opera will correctly redirect everytime. We've captured the response headers and compared the success and failures, they are identical. final boolean isAuthenticated = authenticateUser(userName, password, req); // Send user on to the original URL if (isAuthenticated) { res.sendRedirect(targetURL); return; } As you can see, once the user is authenticated I do a redirect to the original URL. Is there a step I'm missing? The authenticateUser() method is taken verbatim from an example in Oracle's documents. private boolean authenticateUser(final String userName, final String password, HttpServletRequest request) { boolean results; try { ServletAuthentication.login(new CallbackHandler() { @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(userName); } if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password.toCharArray()); } } } }, request); results = true; } catch (LoginException e) { results = false; } return results; I am asking the question here because I don't know if the issue is with the Weblogic config or the code. If this question is more suited to ServerFault please let me know and I will post there. It is odd that it works everytime in Firefox and Opera but not in Internet Explorer. I wish that not using Internet Explorer was an option but it is currently the company standard. Any help or direction would be appreciated. I have tested against IE 6 & 8 and deployed the custom provider on 3 different environments and I can still reproduce the bug.

    Read the article

  • Cannot figure out how to take in generic parameters for an Enterprise Framework library SQL statement

    - by KallDrexx
    I have written a specialized class to wrap up the Enterprise Library database functionality for easier usage. The reason for using the Enterprise Library is that my applications commonly connect to both Oracle and SQL Server database systems. My wrapper handles creating connection strings on the fly, connecting, and executing queries, allowing my main code to only have to write a few lines of code to do database stuff and deal with error handling. As an example my ExecuteNonQuery method has the following declaration: /// <summary> /// Executes a query that returns no results (e.g. insert or update statements) /// </summary> /// <param name="sqlQuery"></param> /// <param name="parameters">Hashtable containing all the parameters for the query</param> /// <returns>The total number of records modified, -1 if an error occurred </returns> public int ExecuteNonQuery(string sqlQuery, Hashtable parameters) { // Make sure we are connected to the database if (!IsConnected) { ErrorHandler("Attempted to run a query without being connected to a database.", ErrorSeverity.Critical); return -1; } // Form the command DbCommand dbCommand = _database.GetSqlStringCommand(sqlQuery); // Add all the parameters foreach (string key in parameters.Keys) { if (parameters[key] == null) _database.AddInParameter(dbCommand, key, DbType.Object, null); else _database.AddInParameter(dbCommand, key, DbType.Object, parameters[key].ToString()); } return _database.ExecuteNonQuery(dbCommand); } _database is defined as private Database _database;. Hashtable parameters are created via code similar to p.Add("@param", value);. The issue I am having is that it seems that with the Enterprise Library database framework you must declare the DbType of each parameter. This isn't an issue when you are calling the database code directly and forming the parameters yourself, but it doesn't work for a generic abstraction class such as the one I have. To try and get around that, I thought I could just use DbType.Object and assume the DB will figure it out based on the columns the SQL is working with. Unfortunately, this is not the case, as I get the following error: Implicit conversion from data type sql_variant to varchar is not allowed. Use the CONVERT function to run this query Is there any way to use generic parameters in a wrapper class, or am I just going to have to move all my DB code into my main classes?
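    One way out, sketched under the assumption that inferring the type from the runtime value is acceptable (needs using System and using System.Data; only a handful of mappings shown), is to map each value's CLR type to a DbType instead of always passing DbType.Object:
    private static DbType InferDbType(object value)
    {
        if (value == null) return DbType.String;
        switch (Type.GetTypeCode(value.GetType()))
        {
            case TypeCode.Int32:    return DbType.Int32;
            case TypeCode.Int64:    return DbType.Int64;
            case TypeCode.Decimal:  return DbType.Decimal;
            case TypeCode.Double:   return DbType.Double;
            case TypeCode.DateTime: return DbType.DateTime;
            case TypeCode.Boolean:  return DbType.Boolean;
            default:                return DbType.String;
        }
    }

    // inside the wrapper's parameter loop, pass the raw value rather than ToString():
    // _database.AddInParameter(dbCommand, key, InferDbType(parameters[key]), parameters[key]);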

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found. Models for hierarchical data: slides with good explanations of tradeoffs and example usage Representing hierarchies in MySQL: very good overview of Nested Set in particular Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way on explanation Options Ones I am aware of and general features: Adjacency List: Columns: ID, ParentID Easy to implement. Cheap node moves, inserts, and deletes. Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve). Use Common Table Expressions in those databases that support them to traverse. Nested Set (a.k.a Modified Preorder Tree Traversal) First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties Columns: Left, Right Cheap level, ancestry, descendants Compared to Adjacency List, moves, inserts, deletes more expensive. Requires a specific sort order (e.g. created). So sorting all descendants in a different order requires additional work. Nested Intervals Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information. Bridge Table (a.k.a. Closure Table: some good ideas about how to use triggers for maintaining this approach) Columns: ancestor, descendant Stands apart from table it describes. Can include some nodes in more than one hierarchy. Cheap ancestry and descendants (albeit not in what order) For complete knowledge of a hierarchy needs to be combined with another option. Flat Table A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record. Expensive move and delete Cheap ancestry and descendants Good Use: threaded discussion - forums / blog comments Lineage Column (a.k.a. Materialized Path, Path Enumeration) Column: lineage (e.g. /parent/child/grandchild/etc...) Limit to how deep the hierarchy can be. Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path') Ancestry tricky (database specific queries) Database Specific Notes MySQL Use session variables for Adjacency List Oracle Use CONNECT BY to traverse Adjacency Lists PostgreSQL ltree datatype for Materialized Path SQL Server General summary 2008 offers HierarchyId data type appears to help with Lineage Column approach and expand the depth that can be represented.
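    To make the "use Common Table Expressions to traverse" point concrete, here is a small sketch (SQL Server syntax; PostgreSQL needs WITH RECURSIVE, and the nodes table and starting ID are invented) that returns every descendant of one node in an Adjacency List, with its depth:
    WITH descendants (ID, ParentID, Depth) AS (
        SELECT ID, ParentID, 0
          FROM nodes
         WHERE ID = 42
        UNION ALL
        SELECT n.ID, n.ParentID, d.Depth + 1
          FROM nodes n
          JOIN descendants d ON n.ParentID = d.ID
    )
    SELECT * FROM descendants;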

    Read the article

  • No unique bean of type [javax.persistence.EntityManager] is defined

    - by sebajb
    I am using JUnit 4 to test Dao Access with Spring (annotations) and JPA (hibernate). The datasource is configured through JNDI(Weblogic) with an ORacle(Backend). This persistence is configured with just the name and a RESOURCE_LOCAL transaction-type The application context file contains notations for annotations, JPA config, transactions, and default package and configuration for annotation detection. I am using Junit4 like so: ApplicationContext <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="workRequest"/> <property name="dataSource" ref="dataSource" /> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="databasePlatform" value="${database.target}"/> <property name="showSql" value="${database.showSql}" /> <property name="generateDdl" value="${database.generateDdl}" /> </bean> </property> </bean> <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName"> <value>workRequest</value> </property> <property name="jndiEnvironment"> <props> <prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop> <prop key="java.naming.provider.url">t3://localhost:7001</prop> </props> </property> </bean> <bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" /> JUnit TestCase @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = { "classpath:applicationContext.xml" }) public class AssignmentDaoTest { private AssignmentDao assignmentDao; @Test public void readAll() { assertNotNull("assignmentDao cannot be null", assignmentDao); List assignments = assignmentDao.findAll(); assertNotNull("There are no assignments yet", assignments); } } regardless of what changes I make I get: No unique bean of type [javax.persistence.EntityManager] is defined Any hint on what this could be. I am running the tests inside eclipse.
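    One common cause of this particular error (offered as a hedged guess, since the DAO code is not shown) is a DAO that autowires an EntityManager by type; LocalContainerEntityManagerFactoryBean exposes an EntityManagerFactory bean, not an EntityManager, so the injection should go through @PersistenceContext, which the PersistenceAnnotationBeanPostProcessor already declared in the context knows how to satisfy. A sketch, with the Assignment entity assumed:
    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;
    import org.springframework.stereotype.Repository;

    @Repository
    public class JpaAssignmentDao implements AssignmentDao {

        @PersistenceContext
        private EntityManager entityManager;

        @SuppressWarnings("unchecked")
        public List<Assignment> findAll() {
            return entityManager.createQuery("select a from Assignment a").getResultList();
        }
    }
    The test itself also needs the DAO injected (for example via an @Autowired field), since as written assignmentDao is never populated.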

    Read the article

  • NHibernate unable to create SessionFactory

    - by Tyler
    I'm having a bit of trouble setting up NHibernate, and I'm not too sure what the problem is exactly. I'm attempting to save a domain object to the database (Oracle 10g XE). However, I'm getting a TypeInitializationException while trying to create the ISessionFactory. Here is what my hibernate.cfg.xml looks like:

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory name="MyProject.DataAccess">
            <property name="connection.driver_class">NHibernate.Driver.OracleClientDriver</property>
            <property name="connection.connection_string">
              User ID=myid;Password=mypassword;Data Source=localhost
            </property>
            <property name="show_sql">true</property>
            <property name="dialect">NHibernate.Dialect.OracleDialect</property>
            <property name="proxyfactory.factory_class">NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu</property>
            <mapping resource="MyProject/Domain/User.hbm.xml"/>
          </session-factory>
        </hibernate-configuration>

    I created a DAO which I will use to persist domain objects to the database. The DAO uses a HibernateUtil class that creates the SessionFactory. Both classes are in the DataAccess namespace along with the Hibernate configuration. This is where the exception occurs. Here's that class:

        public class HibernateUtil
        {
            private static ISessionFactory SessionFactory = BuildSessionFactory();

            private static ISessionFactory BuildSessionFactory()
            {
                try
                {
                    // This seems to be where the problem occurs
                    return new Configuration().Configure().BuildSessionFactory();
                }
                catch (TypeInitializationException ex)
                {
                    Console.WriteLine("Initial SessionFactory creation failed." + ex);
                    throw new Exception("Unable to create SessionFactory.");
                }
            }

            public static ISessionFactory GetSessionFactory()
            {
                return SessionFactory;
            }
        }

    The DataAccess namespace references the NHibernate DLLs. This is virtually the same setup I've used with Hibernate in Java, so I'm not entirely sure what I'm doing wrong here. Any ideas?

    Edit: The innermost exception is the following: "Could not find file 'C:\Users\Tyler\Documents\Visual Studio 2010\Projects\MyProject\MyProject\ConsoleApplication\bin\Debug\hibernate.cfg.xml'." ConsoleApplication contains the entry point, where I've created a User object and am trying to persist it with my DAO. Why is it looking for the configuration file there? The actual persisting takes place in the DAO, which is in DataAccess. Also, when I add the configuration file to ConsoleApplication, it still does not find it.

    Read the article

  • Is it correct or incorrect for a Java JAR to contain its own dependencies?

    - by 4herpsand7derpsago
    I guess this is a two-part question. I am trying to write my own Ant task (MyFirstTask) that can be used in other projects' build.xml buildfiles. To do this, I need to compile and package my Ant task inside its own JAR. Because this Ant task is fairly complicated, it has about 20 dependencies (other JAR files), such as XStream for OX-mapping, Guice for DI, etc.

    I am currently writing the package task in the build.xml file inside the MyFirstTask project (the buildfile that will package myfirsttask.jar, the reusable Ant task), and I am suddenly realizing that I don't fully understand the intention of a Java JAR. Is the idea that a JAR should not contain its dependencies, leaving it to the runtime configuration (the app container, the runtime environment, etc.) to supply them? I would assume that if this is the case, an executable JAR is an exception to the rule, yes? Or is the intention for Java JARs to also include their dependencies? Either way, I don't want to force my users to copy-and-paste 25+ JARs into their Ant libs; that's just cruel. I like the way WAR files are set up, where the classpath for dependencies is defined under the classes/ directory.

    Ultimately, I'd like my JAR structure to look like:

        myfirsttask.jar/
            com/          --> the root package of my compiled binaries
            config/       --> config files, XML, XSD, etc.
            classes/      --> all dependencies, guice-3.0.jar, xstream-1.4.3.jar, etc.
            META-INF/
                MANIFEST.MF

    I assume that in order to accomplish this (and get the runtime classpath to also look into the classes/ directory), I'll need to modify the MANIFEST.MF somehow (I know there's a manifest attribute called ClassPath, I believe?). I'm just having a tough time putting everything together, and I have a looming, lingering question about the very intent of JARs to begin with.

    Can someone please confirm whether Oracle intends for JARs to contain their dependencies or not? And, either way, what would I have to do in the manifest (or anywhere else) to make sure that, at runtime, the classpath can find the dependencies stored under the classes/ directory? Thanks in advance!
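
    A note on the intended layout: the standard Java class loader does not read JAR files nested inside another JAR, so dependencies placed under a classes/ directory within myfirsttask.jar would not be found at runtime without a custom class loader or a repackaging tool. What the manifest's Class-Path attribute does support is referencing dependency JARs that sit next to the main JAR on disk. A sketch of such a MANIFEST.MF entry, reusing the jar names from the question and an assumed lib/ directory, would be:

        Class-Path: lib/guice-3.0.jar lib/xstream-1.4.3.jar

    Tools such as the Maven Shade plugin or jarjar take the other route and merge the dependencies' classes directly into a single JAR instead.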

    Read the article

  • A two player game over the intranet..

    - by Santwana
    Hi everybody. I am a student of 3rd year engineering and only a novice in my programming skills. I need some help with my project. I wish to develop a two player game to be played over the network (intranet), and I want to develop a simple website with a few HTML pages for it. My ideas for the project run as follows:

    1. People can log in from different systems and check whoever is currently online on the network. The page also shows who is playing with whom.
    2. If a person is interested in playing with a player who is currently online, he sends a request, of which the other player is somehow notified (using a message or an alert on his profile page).
    3. If the player accepts the request, a game is started. This is exactly where I am clueless: how can I make them play the game? I need to develop a turn based game with two players, e.g. a chessboard. How can I do this? The game has to be played live, and it is time tracked. I need your help with coding this part (a sketch of the turn-taking idea follows below).

    The other features I wish to include are:

    4. The game cannot be abruptly terminated by any one of the users. The request to terminate the game should be sent to the other player first, and only if he accepts can the game be terminated.
    5. Whoever wins the game gets plus 10 on their credit, and whoever terminates gets minus 10. The credit remains constant even if he loses, but the success percentage is reduced.
    6. The player with the highest winning percentage is projected as the player of the week on the home page and can post a challenge to all the others.

    I only have an intermediate knowledge of core Java and know the basics of Swing and AWT. I am not at all familiar with networking in Java right now. I have 5 to 6 weeks of time for developing the project, but I hope to learn the things before I start. I would prefer to use a LAN to illustrate the project, and I know only Java, JSP, Oracle, HTML and a bit of XML to develop it. Also, I wish to know if I can code this within 6 weeks - would it be too difficult or complicated? Please spare some time to tell me. I need your suggestions and help. Thank you so much.
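
    As a starting point for the turn-based part, here is a minimal sketch - not a full game, and with the class name and port number invented for illustration - of two players exchanging moves through a small TCP relay. A JSP-based version would instead poll a servlet with requests, but the turn-taking idea is the same: block on the current player's move, forward it to the opponent, then swap turns.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class TurnRelayServer {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(5000)) {
                    Socket p1 = server.accept();   // the first player to connect moves first
                    Socket p2 = server.accept();

                    BufferedReader in1 = new BufferedReader(new InputStreamReader(p1.getInputStream()));
                    BufferedReader in2 = new BufferedReader(new InputStreamReader(p2.getInputStream()));
                    PrintWriter out1 = new PrintWriter(p1.getOutputStream(), true);
                    PrintWriter out2 = new PrintWriter(p2.getOutputStream(), true);

                    boolean firstPlayersTurn = true;
                    while (true) {
                        // Block until the current player sends a move (one line of text), then relay it.
                        String move = (firstPlayersTurn ? in1 : in2).readLine();
                        if (move == null || move.equals("QUIT")) {
                            break; // a real game would apply the +10/-10 credit rules here
                        }
                        (firstPlayersTurn ? out2 : out1).println(move);
                        firstPlayersTurn = !firstPlayersTurn;
                    }
                }
            }
        }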

    Read the article

  • Techniques for querying a set of object in-memory in a Java application

    - by Edd Grant
    Hi All,

    We have a system which performs a 'coarse search' by invoking an interface on another system which returns a set of Java objects. Once we have received the search results, I need to be able to further filter the resulting Java objects based on certain criteria describing the state of their attributes (e.g. from the initial objects, return all objects where x.y > z && a.b == c).

    The criteria used to filter the set of objects each time are partially user configurable; by this I mean that users will be able to select the values and ranges to match on, but the attributes they can pick from will be a fixed set. The data sets are likely to contain <= 10,000 objects for each search. The search will be executed manually by the application user base probably no more than 2000 times a day (approx). It's probably worth mentioning that all the objects in the result set are known domain object classes which have Hibernate and JPA annotations describing their structure and relationships.

    Off the top of my head I can think of 3 ways of doing this:

    1. For each search, persist the initial result set objects in our database, then use Hibernate to re-query them using the finer grained criteria.
    2. Use an in-memory database (such as HSQLDB?) to query and refine the initial result set.
    3. Write some custom code which iterates the initial result set and pulls out the desired records (a sketch of this option follows below).

    Option 1 seems to involve a lot of toing and froing across a network to a physical database (Oracle 10g), which might result in a lot of network and disk activity. It would also require the results from each search to be isolated from other result sets to ensure that different searches don't interfere with each other.

    Option 2 seems like a good idea in principle, as it would allow me to do the finer query in memory and would not require the persistence of result data which would only be discarded after the search was complete. Gut feeling is that this could be pretty performant too, but it might result in larger memory overheads (which is fine, as we can be pretty flexible on the amount of memory our JVM gets).

    Option 3 could be very performant, but it is something I would like to avoid, as any code we write would require such careful testing that the time taken to achieve something flexible and robust enough would probably be prohibitive.

    I don't have time to prototype all 3 ideas, so I am looking for comments people may have on the 3 options above, plus any further ideas I have not considered, to help me decide which idea might be most suitable. I'm currently leaning toward option 2 (the in-memory database), so I would be keen to hear from people with experience of querying POJOs in memory too. Hopefully I have described the situation in enough detail, but don't hesitate to ask if any further information is required to better understand the scenario.

    Cheers, Edd
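
    For a sense of what the custom-code option can look like, here is a minimal sketch with a made-up Result class and attribute names standing in for the real Hibernate-mapped domain objects; the user-selected criteria simply become a predicate applied to the coarse result set in memory.

        import java.util.List;
        import java.util.stream.Collectors;

        public class InMemoryFilter {

            // Hypothetical domain object standing in for the real annotated classes.
            public static class Result {
                final int quantity;
                final String status;
                public Result(int quantity, String status) {
                    this.quantity = quantity;
                    this.status = status;
                }
            }

            // Applies the finer-grained criteria (e.g. x.y > z && a.b == c) to the
            // objects returned by the coarse search, entirely in memory.
            public static List<Result> refine(List<Result> coarseResults,
                                              int minQuantity, String requiredStatus) {
                return coarseResults.stream()
                                    .filter(r -> r.quantity > minQuantity
                                              && requiredStatus.equals(r.status))
                                    .collect(Collectors.toList());
            }
        }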

    Read the article

  • Grails: GGTS not running on Amazon AWS EC2 anyone else successful?

    - by Anonymous Human
    I'm just curious if anyone has had success trying to run the Groovy Grails Tool Suite on an Amazon AWS EC2 instance with its display exported to a Windows machine. If so, I wanted to know which flavor of Linux was used on the EC2 instance. I am not having much success with it on Amazon Linux, but haven't tried their Ubuntu instances yet. I got all the way to getting GGTS installed and getting the display exported, but when I launch GGTS I get log errors about missing libraries. This is most likely because I didn't use yum to install it, so I am probably missing dependencies, but I didn't have a choice - it's not offered as a yum package. Here are my log file errors when I try to launch GGTS:

        !SESSION 2014-06-08 03:08:04.873 -----------------------------------------------
        eclipse.buildId=3.5.1.201405030657-RELEASE-e43
        java.version=1.7.0_55
        java.vendor=Oracle Corporation
        Framework arguments: -product org.springsource.ggts.ide
        Command-line arguments: -os linux -ws gtk -arch x86_64 -product org.springsource.ggts.ide

        !ENTRY org.eclipse.osgi 4 0 2014-06-08 03:08:12.116
        !MESSAGE Application error
        !STACK 1
        java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
            /home/ec2-user/ggts_sh/ggts-3.5.1.RELEASE/configuration/org.eclipse.osgi/bundles/704/1/.cp/libswt-pi-gtk-4335.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
            no swt-pi-gtk in java.library.path
            /home/ec2-user/.swt/lib/linux/x86_64/libswt-pi-gtk-4335.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
            Can't load library: /home/ec2-user/.swt/lib/linux/x86_64/libswt-pi-gtk.so
            at org.eclipse.swt.internal.Library.loadLibrary(Library.java:331)
            at org.eclipse.swt.internal.Library.loadLibrary(Library.java:240)
            at org.eclipse.swt.internal.gtk.OS.<clinit>(OS.java:45)
            at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63)
            at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54)
            at org.eclipse.swt.widgets.Display.<clinit>(Display.java:133)
            at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:679)
            at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:162)
            at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:154)
            at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:96)
            at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:181)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:636)
            at org.eclipse.equinox.launcher.Main.basicRun(Main.java:591)
            at org.eclipse.equinox.launcher.Main.run(Main.java:1450)
            at org.eclipse.equinox.launcher.Main.main(Main.java:1426)

    Read the article

  • Converting text into numeric in xls using Java

    - by Work World
    When I create an Excel sheet through Java, a column which has the NUMBER datatype in the Oracle table gets converted to text format in Excel. I want it to remain in number format. Below is my code snippet for the Excel creation:

        FileWriter fw = new FileWriter(tempFile.getAbsoluteFile(), true);
        // BufferedWriter bw = new BufferedWriter(fw);
        HSSFWorkbook wb = new HSSFWorkbook();
        HSSFSheet sheet = wb.createSheet("Excel Sheet");

        // Column size of the excel sheet
        for (int i = 0; i < 10; i++) {
            sheet.setColumnWidth((short) i, (short) 8000);
        }

        String userSelectedValues = result;

        HSSFCellStyle style = wb.createCellStyle();
        // HSSFDataFormat df = wb.createDataFormat();
        style.setFillForegroundColor(HSSFColor.GREY_25_PERCENT.index);
        style.setFillPattern(HSSFCellStyle.SOLID_FOREGROUND);
        // style.setDataFormat(df.getFormat("0"));

        HSSFFont font = wb.createFont();
        font.setColor(HSSFColor.BLACK.index);
        font.setBoldweight((short) 700);
        style.setFont(font);

        int selecteditems = userSelectedValues.split(",").length;
        // HSSFRow rowhead = sheet.createRow((short) 0);
        // System.out.println("**************selecteditems************" + selecteditems);

        for (int k = 0; k < selecteditems; k++) {
            HSSFRow rowhead = sheet.createRow((short) k);
            if (userSelectedValues.contains("O_UID")) {
                HSSFCell cell0 = rowhead.createCell((short) k);
                cell0.setCellValue("O UID");
                cell0.setCellStyle(style);
                k = k + 1;
            }
            // ... some columns here ...
        }

        int index = 1;
        for (int i = 0; i < dataBeanList.size(); i++) {
            odb = (OppDataBean) dataBeanList.get(i);
            HSSFRow row = sheet.createRow((short) index);
            for (int j = 0; j < selecteditems; j++) {
                if (userSelectedValues.contains("O_UID")) {
                    row.createCell((short) j).setCellValue(odb.getUID());
                    j = j + 1;
                }
            }
            index++;
        }

        FileOutputStream fileOut = null;
        try {
            fileOut = new FileOutputStream(path.toString() + "/temp.xls");
        } catch (FileNotFoundException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        }
        try {
            wb.write(fileOut);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        try {
            fileOut.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
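
    A minimal sketch (not the poster's code, and assuming a reasonably recent POI 3.x and that the value arrives as a numeric string) of how a cell usually ends up numeric in the HSSF API: pass a double to setCellValue and attach a numeric data format, rather than passing a String, which always produces a text cell.

        import java.io.FileOutputStream;
        import org.apache.poi.hssf.usermodel.HSSFCell;
        import org.apache.poi.hssf.usermodel.HSSFCellStyle;
        import org.apache.poi.hssf.usermodel.HSSFDataFormat;
        import org.apache.poi.hssf.usermodel.HSSFRow;
        import org.apache.poi.hssf.usermodel.HSSFSheet;
        import org.apache.poi.hssf.usermodel.HSSFWorkbook;

        public class NumericCellSketch {
            public static void main(String[] args) throws Exception {
                HSSFWorkbook wb = new HSSFWorkbook();
                HSSFSheet sheet = wb.createSheet("Excel Sheet");

                HSSFDataFormat df = wb.createDataFormat();
                HSSFCellStyle numberStyle = wb.createCellStyle();
                numberStyle.setDataFormat(df.getFormat("0")); // integer format; use "0.00" for decimals

                HSSFRow row = sheet.createRow(0);
                HSSFCell cell = row.createCell(0);
                // Passing a double (not a String) is what makes Excel treat the cell as numeric.
                cell.setCellValue(Double.parseDouble("12345"));
                cell.setCellStyle(numberStyle);

                try (FileOutputStream out = new FileOutputStream("temp.xls")) {
                    wb.write(out);
                }
            }
        }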

    Read the article
