Search Results

Search found 10028 results on 402 pages for 'berkeley db'.

Page 243/402 | < Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >

  • Record management system java web framework

    - by Kamil Tomšík
    We're currently reconsidering technologies and frameworks to get more agile with "simple" RMS CRUD-based projects - in short, short-lived things like this. Right now we have a custom extension on top of SmartGWT, but after some time it has proven not to be flexible enough. I also personally dislike the java-js compilation process and the whole GWT codebase. Not only is the design ugly, it also makes certain low-level js things very complicated if not completely impossible. So what I'm looking for is:
    - as close to the web as possible, like JSF or possibly Tapestry; it is very important to be able to get "low" and weave the framework if necessary - it happens more often than we thought
    - datagrid capable - Ext.js & PrimeFaces look pretty good, Vaadin does too
    - db-schema generators (optional, no matter in which way)

    If it were only up to me, I'd probably stick to Ext.js + a custom REST-based java solution, possibly generated from the database schema (not sure about concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it's hard for me to judge or even propose other frameworks. What would be your proposition? What problems have you faced? What was your solution, and how hard do you think it was to deal with them in the "big picture"?

    Read the article

  • Site in subdomain (MaraDNS + Nginx)

    - by Grzegorz
    Hello, I'm doing some experiments on my VPS with Ubuntu, where I've installed MaraDNS with Nginx. At the moment I have a static site correctly launched and available from the Internet (maindomain.com). As the next step I want to add a new site that will be available on a subdomain, for example dev.maindomain.com. I've tried the following in the db.maindomain.com file (used by MaraDNS), where xxx.xxx.xxx.xxx is the VPS IP address:

        maindomain.com. xxx.xxx.xxx.xxx
        www.maindomain.com. CNAME maindomain.com.
        dev.maindomain.com. xxx.xxx.xxx.xxx

    In nginx.conf I have:

        server {
            listen 80;
            server_name maindomain.com;
            access_log /var/log/nginx/maindomain.com.log
            location / {
                root /var/www/maindomain.com;
                index index.html;
            }
        }
        server {
            listen 80;
            server_name dev.maindomain.com;
            access_log /var/log/nginx/dev.maindomain.com.log
            location / {
                root /var/www/dev.maindomain.com;
                index index.html;
            }
        }

    With this configuration maindomain.com works properly, but dev.maindomain.com isn't available. When I try "ping dev.maindomain.com" I do get my xxx.xxx.xxx.xxx IP. Do you have any suggestions on how I can resolve this problem?

    Read the article

  • Running Non-profit Web Applications on Cloud/Dedicated Hosting [closed]

    - by cillosis
    Possible Duplicate: How to find web hosting that meets my requirements?

    I oftentimes build web applications purely because I enjoy it. I like building useful tools or open source applications that don't come with a price tag. That being said, many of these applications can be quite complex, requiring services beyond shared hosting (e.g. specific PHP extensions). This leaves me with two options:
    - Make the web application less complex and run it on shared hosting.
    - Fork out money for cloud or dedicated/VPS hosting.

    Considering the application is free (I don't make money off of it intentionally), the money for hosting comes out of my own pocket. I know I am not alone in this sticky situation. So the question is: what are the hosting options that provide more advanced features, such as shell access via SSH and the ability to install specific software/extensions (e.g. if I wish to use a NoSQL DB such as Redis, MongoDB, or Cassandra), at a free or low price point? I know free usually equates to bad/unreliable hosting -- but it's not always the case. There are a couple of providers with free plans I know of:
    - Amazon EC2 - free micro-instance for 1 year
    - AppHarbor - cloud-based .NET web application hosting with a free plan

    What else is available for hosting of non-profit applications?

    Read the article

  • Highly scalable and dynamic "rule-based" applications?

    - by Prof Plum
    For a large enterprise app, everyone knows that being able to adjust to change is one of the most important aspects of design. I use a rule-based approach a lot of the time to deal with changing business logic, with each rule being stored in a DB. This allows easy changes to be made without diving into nasty details. Now, since C# cannot Eval("foo(bar);"), this is accomplished by using formatted strings stored in rows that are then processed in JavaScript at runtime. This works fine; however, it is less than elegant, and would not be the most enjoyable thing for anyone else to pick up once it becomes legacy. Is there a more elegant solution to this? When you get into thousands of rules that change fairly frequently, it becomes a real bear, but this can't be such an uncommon problem that someone hasn't thought of a better way to do it. Any suggestions? Is the current method defensible? What are the alternatives? Edit: Just to clarify, this is a large enterprise app, so no matter which solution works, there will be plenty of people constantly maintaining its rules and data (around 10 people). Also, the data changes frequently enough that some sort of centralized server system is basically a must.
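
    As one hedged illustration of an alternative (not necessarily the right fit for the app described above): if each rule can be expressed as structured data rather than a code string - say a property name, an operator and a comparison value, which are hypothetical column names here - the rows can be compiled into ordinary delegates with System.Linq.Expressions, so the rules still live in the DB but execute as plain .NET code.

        using System;
        using System.Linq.Expressions;

        // Hypothetical shape of a rule row loaded from the database.
        public class RuleRow
        {
            public string Property { get; set; }   // e.g. "Amount" - assumed to be a decimal property on T
            public string Operator { get; set; }   // e.g. ">"
            public decimal Value { get; set; }
        }

        public static class RuleCompiler
        {
            // Compiles one rule row into a reusable predicate for entity type T.
            public static Func<T, bool> Compile<T>(RuleRow rule)
            {
                var param = Expression.Parameter(typeof(T), "x");
                var left  = Expression.Property(param, rule.Property);
                var right = Expression.Constant(rule.Value, typeof(decimal));

                Expression body;
                switch (rule.Operator)
                {
                    case ">":  body = Expression.GreaterThan(left, right); break;
                    case "<":  body = Expression.LessThan(left, right);    break;
                    case "==": body = Expression.Equal(left, right);       break;
                    default:   throw new NotSupportedException("Unknown operator: " + rule.Operator);
                }

                return Expression.Lambda<Func<T, bool>>(body, param).Compile();
            }
        }

    Compiled predicates can be cached per rule ID, so thousands of rules pay the compilation cost only once; anything that doesn't fit the property/operator/value shape would still need a different mechanism (or a scripting engine).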

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in ASP.NET James Chambers.

    We got to spend a great week back in March at Prairie Dev Con West, chock full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, if it was different from SQL Server, if it was the same as EF proper and if there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)

    Consider the example of a hockey league. You have several different entities in the league including games, teams that play the games and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen:
    - The team, games and mascot will be deleted, and
    - The players for that team will remain in the league (and therefore the database) but they should no longer be assigned to a team.

    So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project.

    The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach

    First up let's take a look at the DB first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:
    - Players.Team_Id (NULL) –> Teams.Id
    - Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE)
    - Games.HomeTeam_Id (NOT NULL) –> Teams.Id
    - Games.AwayTeam_Id (NOT NULL) –> Teams.Id

    Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team's mascot when the team is deleted. While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying an ON DELETE CASCADE on these foreign keys would cause a circular reference error: "The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE" – MSDN

    When we create an entity data model from the above database, we get the model shown in the original post's diagram. In order to get the Games to be deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now, we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.

    Building The Same Sample with EF Code First

    Next, we're going to build up the model with the code first approach.
    EF Code First is defined on the ADO.NET team blog as such: "Code First allows you to define your model using C# or VB.Net classes; optionally, additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database."

    Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN:
    - If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship.
    - If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null.
    - The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).

    Our DbContext consists of 4 DbSets:

        public DbSet<Team> Teams { get; set; }
        public DbSet<Player> Players { get; set; }
        public DbSet<Mascot> Mascots { get; set; }
        public DbSet<Game> Games { get; set; }

    When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.

    Data Annotations:

        public class Mascot
        {
            public int Id { get; set; }
            public string Name { get; set; }
            [Required]
            public virtual Team Team { get; set; }
        }

    Fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
        }

    The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required; however, all the Player entities must be loaded into memory for the cascading to work properly.

    The Game –> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set On Cascade Delete for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database first example, this causes a circular reference error and throws the following SqlException: "Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint."

    To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.HomeGames)
                .WithRequired(g => g.HomeTeam)
                .WillCascadeOnDelete(false);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.AwayGames)
                .WithRequired(g => g.AwayTeam)
                .WillCascadeOnDelete(false);

            base.OnModelCreating(modelBuilder);
        }

    Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team.
        foreach (Game awayGame in jets.AwayGames.ToArray())
        {
            entities.Games.Remove(awayGame);
        }
        foreach (Game homeGame in homeGames)
        {
            entities.Games.Remove(homeGame);
        }
        entities.Teams.Remove(jets);
        entities.SaveChanges();

    Overriding the Defaults – When and How To

    As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity. More information is available on MSDN.

    Going Further

    These were simple examples but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that:
    - Entity Framework Code First also works seamlessly with SQL Azure (MSDN)
    - Database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers)
    - You can use Code Based migrations to manage database upgrades as your model continues to evolve (MSDN)

    Next Steps

    There's no time like the present to start the learning, so here's what you need to do:
    - Get up-to-date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012)
    - Build yourself a project to try these concepts out (or download the sample project)
    - Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!).

    Good luck!

    About the Authors

    David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta where he builds commercial software products for the energy industry. Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community giving presentations both locally and at conferences. David also serves as the President of Calgary .NET User Group.

    James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.
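
    Editor's addendum: one detail the post calls out but doesn't show end to end is the eager loading that makes the in-memory cascading behaviour work. Here is a minimal sketch of that part, loosely following the sample; HockeyContext, the Team.Players collection and the Name property are assumed names, not necessarily what the downloadable sample uses.

        using System.Linq;

        public static class TeamCleanup
        {
            // Deletes a team and its games, and detaches its players.
            public static void DeleteTeam(string teamName)
            {
                using (var entities = new HockeyContext())
                {
                    var team = entities.Teams
                        .Include("Players")    // players must be in memory so EF can null their Team reference
                        .Include("HomeGames")  // cascade is disabled for games, so they are removed manually below
                        .Include("AwayGames")
                        .Single(t => t.Name == teamName);

                    // The mascot needs no special handling here: the required relationship
                    // results in a database-level cascade delete.
                    foreach (var game in team.HomeGames.Concat(team.AwayGames).ToArray())
                    {
                        entities.Games.Remove(game);
                    }

                    entities.Teams.Remove(team);
                    entities.SaveChanges();
                }
            }
        }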

    Read the article

  • What should be the architecture of an urban game system?

    - by pmichna
    I'm going to develop an urban game using a telco API for phone geolocation and sending/receiving messages. A player would pick one of the scenarios, move around the city, and when he hits a given location he gets a message and possibly has to answer it. I'm wondering what approach would be best in my case. I came up with this general idea:
    - Web application as a user interface (user registration, player rankings, scenario editing), written in Ruby on Rails.
    - Game server (hosting games, game logic like checking players' locations, sending and receiving messages), written in Ruby.
    - Database (users, scores, scenarios etc.), probably MySQL or some other open source DB.

    I want to learn Ruby and RoR, that's why I chose this language and framework. Do you think it's a good choice for a game server? Another question: is this project division good? I mean, I have little experience with Ruby and Rails - that's why I'm asking. Maybe it's better to have the web application merged with the game server, and somehow have the server hosting the RoR application do tasks like mobile phone pinging and message sending? How would that be performed? Maybe this is worth mentioning: the API is RESTful, most results are JSON, a few are XML.

    Read the article

  • Can AJAX in a CMS slow down your server

    - by Saif Bechan
    I am currently developing some plugins for WordPress, and I was wondering which route to take. Let's take an example: you want to display the last 3 tweets on your page.

    Option 1: You do things the normal way inside WordPress. Someone enters the website; while generating the page, you fetch the tweets in PHP via the Twitter API and just display them where you want. Now the small problem with this is that you have to wait for the response from Twitter. This takes a few ms. No real problem, but this question is just out of curiosity.

    Option 2: Here you don't do anything in WordPress on the initial load, but you do have the API inside. Now you just generate the page, and as soon as the page is done on the client side, you make a small AJAX call back to the server via a WordPress plugin to fetch your latest tweets - i.e. asynchronously. Now the problem with this, IMO, is that you have much more stress on your server. For starters you have two HTTP requests instead of one. Secondly, the WordPress core has to load two times instead of one.

    Other options: Now I know there are a lot of other options:
    1) Getting the tweets directly via javascript, no stress on the server at all.
    2) Caching the tweets so they are fetched from the DB instead of using the API every time.
    3) Getting the tweets from an ajax call that is not a WordPress plugin.
    4) Many more.

    My question: if you only compare options 1 and 2, which would be the better choice?

    Read the article

  • How to set up a new Java environment - largely interfaces

    - by Chris Kimpton
    Hi, it looks like I need to set up a new Java environment for some interfaces we need to build. Say our system is X and we need interfaces to systems A, B and C. Then we will be writing interfaces X-A, X-B, X-C. Our system has a bus within it, so the publishing on our side will be to the bus, and the interface processes will be taking from the bus and mapping to the destination system. It's for a vendor-based system - so most of the core code we can't touch. Currently thinking we will have several processes, one per interface we need to do. The question is how to structure things. Several of the APIs we need to work with are Java based. We could go EJB, but we prefer to keep it simple, one process per interface, so that we can restart them individually. Similarly, SOA seems overkill, although I am probably mixing my thoughts about implementations of it compared to the concepts behind it... Currently thinking that something Spring based is the way to go. In true "leverage a new tech if possible" style, I am thinking maybe we can shoehorn some JRuby into this, perhaps to make the APIs more readable, perhaps event-machine-like, and to make the interface code more business-friendly, perhaps even storing the mapping code in the DB as ruby snippets that get mixed in... but that's an aside... So, any comments/thoughts on the Spring approach - anything more up-to-date/relevant these days? EDIT: Looking at JRuby further, I am tempted to write it fully in JRuby... in which case do we need any frameworks at all, perhaps some gems to make things clearer... Thanks in advance, Chris

    Read the article

  • Proper Data Structure for Commentable Comments

    - by Wesley
    I've been struggling with this on an architectural level. I have an object which can be commented on; let's call it a Post. Every Post has a unique ID. Now I want to comment on that Post, and I can use the ID as a foreign key, and each PostComment has an ItemID field which correlates to the Post. Since each Post has a unique ID, it is very easy to assign "Top Level" comments. When I comment on a comment, however, I feel like I now need a PostCommentComment, which attaches to the ID of the PostComment. Since IDs are assigned sequentially, I can no longer simply use ItemID to differentiate where in the tree the comment is assigned. I.e. both a Post and a PostComment might have an ID of '5', so my foreign key relationship is invalid. This seems like it could go on infinitely, with PostCommentCommentComments etc... What's the best way to solve this? Should I have a field in the comment called "IsPostComment" or something of the like to know which collection to attach the ID to? This strikes me as the best solution I've seen so far, but now I feel like I need to make recursive database calls which start to get expensive. Meaning, I get a Post and get all PostComments where ItemID == Post.ID && IsPostComment == true. Then I take that as a collection, gather all the IDs of the PostComments, and do another search where ItemID == PostComment[all].ID && IsPostComment == false, then repeat infinitely. This means I make a call for every layer, and if I'm calling 100 Posts, I might make 1000 DB calls to get 10 layers of comments each. What is the right way to do this?
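
    For what it's worth, a commonly used alternative to the PostComment / PostCommentComment chain is a single self-referencing comments table (an adjacency list): every comment stores the ID of its post plus an optional parent comment ID, so one query per post returns the whole thread and the tree is assembled in memory. A minimal C# sketch (class and property names are hypothetical, not taken from the question):

        using System.Collections.Generic;
        using System.Linq;

        public class Comment
        {
            public int Id { get; set; }
            public int PostId { get; set; }            // every comment knows which post it belongs to
            public int? ParentCommentId { get; set; }  // null means a top-level comment
            public string Body { get; set; }
            public List<Comment> Replies { get; } = new List<Comment>();
        }

        public static class CommentTree
        {
            // Builds the reply tree from the flat list returned by a single
            // "WHERE PostId = @postId" query - no per-level round trips.
            public static List<Comment> Build(IEnumerable<Comment> flatComments)
            {
                var byId  = flatComments.ToDictionary(c => c.Id);
                var roots = new List<Comment>();

                foreach (var comment in byId.Values)
                {
                    if (comment.ParentCommentId.HasValue &&
                        byId.TryGetValue(comment.ParentCommentId.Value, out var parent))
                    {
                        parent.Replies.Add(comment);
                    }
                    else
                    {
                        roots.Add(comment);
                    }
                }
                return roots;
            }
        }

    Because the IDs are unique within one table, the "Post ID 5 vs comment ID 5" collision disappears, and the thousand-call pattern becomes one query per post (or one per page of posts).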

    Read the article

  • Combo/Input LOV displaying non-reference key value

    - by [email protected]
    It's a very common LOV use case that we want to display a non-key value in the LOV but store the key value in the DB. I had to do the same in a sample application I was building. During implementation of this, I realized that there are multiple ways to achieve it. I am going to describe each of these below.

    Example: Let's take an example from our classic HR schema. I have 2 tables, Employee and Department, where Dno is the foreign key attribute in Employee that references the Department table. I want to create a LOV for Department such that the list always displays Dname instead of Dno. However, when I update it, it should update the reference key Dno. To achieve this I had 3 alternatives.

    1) Approach 1: Create a composite VO and add the attributes from Department into Employee using a join. Refer to the blog http://andrejusb.blogspot.com/2009/11/defining-lov-on-reference-attribute-in.html

    Positives:
    1. Easy to implement and use.
    2. We can use this attribute directly in queries defined on the new attribute, i.e. if I have to display this inside a query panel.

    Negative: We have to create an additional join on the VO. Ex:

        SELECT Employees.EMPLOYEE_ID,
               Employees.FIRST_NAME,
               Employees.LAST_NAME,
               Employees.EMAIL,
               Employees.PHONE_NUMBER,
               Department.Dno,
               Department.Dname
        FROM EMPLOYEES Employees, Department Department
        WHERE Employees.Dno = Department.Dno

    2) Approach 2:

    Read the article

  • Need assistance matching a general theme style as well as eCommerce capability

    - by humble_coder
    I'm in the process of acquiring a new design client. They are getting into the business of "auto parts wholesaling" and they want a storefront. My preference is/was to create something from scratch. However, there is an established trend in their particular market (similar parts, layout, etc). They insist on following the existing visual trend, as per the following:
    - http://www.xtremediesel.com/
    - http://www.thoroughbreddiesel.com/
    - http://www.alligatorperformance.com/

    My plan of attack at this point is to find a comparable WP theme and a flexible (but useful) backend/product management system. Their current demo site (which their previous developer made a stab at) is using Pinnacle Cart. It is nowhere near what they need, nor is it intuitive to work with. I was actually considering Magento for its greater abilities, but I'm still considering options. That said, my two primary dilemmas are as follows:
    1) I need a theme that mimics the general style of those listed. They explicitly said they didn't want anything too clean (e.g. ThemeForest, WooThemes) as it "wasn't rugged or busy looking enough" for their field.
    2) I need a WP/Magento/WP e-Commerce (or any one of a host of other) plugin that will allow for bulk import/update of nearly 200,000 products, descriptions and images. I'm not opposed to manually interfacing with the DB for the import, but in the end I need a store/system that doesn't needlessly add 50 tables to accommodate some "wet behind the ears" concept of table normalization and is easy to add to.

    Anyway, if anyone has any quality suggestions regarding either of these issues, it would be most appreciated. Best.

    Read the article

  • Declarative View Objects (VOs) for better ADF performance

    - by Shay Shmeltzer
    Just got back from ODTUG's Kscope13 conference, which had a lot of good, deep ADF content. In one of my sessions I ran out of time to do one of my demos, so I wanted to share it here instead. This is a demo of how declarative View Objects can increase your application's performance. For those who are not familiar with declarative VOs, these are VOs that don't actually specify a hard-coded query. Instead, ADF creates their query at runtime, and it does so based on the data that is requested in your UI layer. This can be a huge saver of both DB and network resources. More in the documentation. Here is a quick example that shows you how using such a VO can automatically switch to a simpler SQL statement instead of a complex join when needed (note: while I demo with 11.1.2.*, the feature is also there in 11.1.1.* versions). The demo also shows you how you can monitor the SQL that ADF BC issues to the database using the WebLogic logging feature in JDeveloper. As a side note, I would have loved to see more ADF developers attending Kscope. This demo was part of the "ADF intro" track at Kscope; in the advanced ADF track you would have been treated to a full tuning session about ADF with lots of other tips. Consider attending Kscope next year - it is going to be in Seattle this time.

    Read the article

  • RMS java web framework

    - by Kamil Tomšík
    We're currently reconsidering technologies and frameworks to get more agile with "simple" RMS CRUD-based projects - in short, short-lived things like this. Right now we have a custom extension on top of SmartGWT, but after some time it has proven not to be flexible enough. I also personally dislike that java-js compilation process and the whole GWT codebase. Not only is it ugly in its design, it also makes certain low-level js things very complicated if not completely impossible. So what I'm looking for is:
    - as close to the web as possible, like JSF or possibly Tapestry; it is very important to be able to get "low" and weave the framework if necessary - it happens more often than we thought
    - datagrid capable - Ext.js & PrimeFaces look pretty good, Vaadin does too
    - db-schema generators (optional, no matter in which way)

    If it were only up to me, I'd probably stick to Ext.js + a custom REST-based java solution, possibly generated from the database schema (not sure about concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it's kind of hard for me to judge or even propose other frameworks. What would be your proposition? What are the problems you've faced, what was your solution, and how hard do you think it was to deal with them in the "big picture"?

    Read the article

  • Oracle Developer Days 2013

    - by Anne Manke
    The Oracle Database in practice - what's inside the editions? Use cases, tips and tricks to take home, including an outlook on new features.

    The use cases for the Oracle Database are manifold, and so Oracle offers its market-leading database in different editions. Over 30 years of experience in ongoing development have led to a wealth of useful features, which are sensibly distributed across the various editions. An outlook on the features of the new database version planned for 2013 rounds off the workshop. In this event, put together specifically by the BU DB, we will bring you up to date on the following topics, along with many tips and tricks:
    - The differences between the editions and their secrets
    - An extensive basic feature set even without options
    - Performance and scalability in the individual editions
    - Cost and resource savings made easy
    - Security in the database
    - Increasing availability with simple means
    - Handling large data volumes
    - Cloud technologies in the Oracle Database

    Dates:
    - 23.01.2013: Oracle Niederlassung Stuttgart, Liebknechtstr. 35, D-70565 Stuttgart [Registration by email]
    - 30.01.2013: Oracle Niederlassung Potsdam, Schiffbauergasse 14, D-14467 Potsdam [Registration by email]
    - 05.02.2013: Oracle Niederlassung Düsseldorf, Hamborner Str. 51, D-40472 Düsseldorf [Registration by email]

    Registration: Sign up for the event today - participation is free! By email to Barbara Frank, ORACLE Deutschland B.V. & Co KG, or by phone: +49 (0)711 72840-211

    Agenda:
    - 10:00 Start of the event
    - The Oracle Database and its editions at a glance - Oracle XE, SE1, SE, EE: Who needs what? What are the differences...?
    - The Standard Edition - an extensive basic feature set: SQL and PL/SQL: more than SELECT, Application Express, Oracle TEXT and more...
    - Lunch break
    - More performance: the sports package in the Enterprise Edition - fast statement execution, guaranteed resource allocation, saving storage space...
    - More security: the security package in the Enterprise Edition - multi-tenancy out of the box, auditing options
    - More availability: the mobility package in the Enterprise Edition - Flashback Database, options with Data Guard, ...
    - 17:00: End of the event

    We look forward to seeing you!

    Read the article

  • POST attack on my website

    - by benhowdle89
    Hi, I have a site (humanisms.co.uk) which incorporates a voting system, i.e. a user clicks "Up" and it sends a parameter to a PHP script via AJAX, the PHP inserts the vote into a MySQL db, and the new "Up" vote is sent back to the page to update the vote count. This is working great, but I've noticed that the number of votes for one of my questions shot up last night. I viewed my webhost's access logs and saw this line:

        108.27.195.232 - - [03/Mar/2011:15:20:18 +0000] "POST /vote.php HTTP/1.1" 200 2 "http://www.humanisms.co.uk/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.114 Safari/534.16"

    This is repeated well over 100 times, sometimes more than once a second. Now I know they probably aren't sitting there clicking Vote, but are running some sort of PHP loop? I'm not worried about SQL injection, but what can I do to prevent this same IP address from doing this, or what can I do in general to avoid this scenario? I should also say that there's no login, so anyone can click using the voting system. Thanks

    Read the article

  • How to deal with colleagues who refuse to follow good practices?

    - by Adrian Shum
    I was discussing with a colleague about what should be used when a DB entity refers to another. I don't think there is any good reason to break the practice of putting the primary key in the referring entity. However, my colleague says: "You should use a surrogate key in the entity, but it is better to put the human-readable natural key in the referring entity. As long as it is unique, it is fine, and it is easier when you are doing support or maintenance work."

    I know it will work, but obviously it is not good practice to put a non-PK unique column in as a "foreign key" just to gain a bit of ease in writing SQL during support because we can have fewer table joins. Though I mentioned that his approach is conceptually incorrect, and causes problems in practice too, he seems to prefer trading off correctness in the data model in exchange for ease of maintenance. And he said: "I know it is not good practice, but good practice is not a golden rule."

    Honestly, I feel frustrated when dealing with something like this. I know there are always cases where we should break some rule or practice, but this is doubtless not such a case. What will you do when you are facing a situation like this? Please assume you are a senior developer who is expected to contribute to overall development direction and conventions.
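
    For what it's worth, the compromise that usually addresses both concerns is to keep the surrogate primary key as the only column other tables reference, and to put a unique constraint on the human-readable natural key so support queries can still join or filter on it. A minimal C# sketch (entity and column names are hypothetical):

        using System.ComponentModel.DataAnnotations;

        public class Product
        {
            public int Id { get; set; }        // surrogate primary key - the only column other tables reference

            [Required, MaxLength(20)]
            public string Sku { get; set; }    // human-readable natural key; enforce uniqueness with a unique
                                               // index (fluent API or directly in the DB), but never reference it
        }

        public class OrderLine
        {
            public int Id { get; set; }
            public int ProductId { get; set; } // FK points at Product.Id, not at Sku
            public virtual Product Product { get; set; }
            public int Quantity { get; set; }
        }

    Support staff then get the readable value with a single extra join (OrderLine to Product for the Sku), while the model keeps referencing the surrogate key.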

    Read the article

  • Intel 82576 Network card

    - by No1_Melman
    I have an Intel dual-port PCIe NIC with two 82576 interfaces, according to Ubuntu 12.04. I run the command sudo lshw -html > /home/melman/Documents/hardware.html and it shows both of the interfaces, but they're grayed out?! How can I enable them? ifconfig output:

        bond0     Link encap:Ethernet  HWaddr 00:00:00:00:00:00
                  inet addr:192.168.100.2  Bcast:192.168.100.255  Mask:255.255.255.0
                  UP BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        eth0      Link encap:Ethernet  HWaddr e0:69:95:d1:db:ff
                  inet addr:192.168.10.63  Bcast:192.168.10.255  Mask:255.255.255.0
                  inet6 addr: fe80::e269:95ff:fed1:dbff/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:2903 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2627 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1524738 (1.5 MB)  TX bytes:430196 (430.1 KB)
                  Interrupt:20 Memory:f7f00000-f7f20000

        eth3      Link encap:Ethernet  HWaddr 00:50:b6:50:a7:f9
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        eth4      Link encap:Ethernet  HWaddr 00:1b:21:6e:99:77
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Memory:f7c00000-f7c20000

        eth5      Link encap:Ethernet  HWaddr 00:1b:21:6e:99:76
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Memory:f7c20000-f7c40000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:246 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:246 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:17584 (17.5 KB)  TX bytes:17584 (17.5 KB)

    Read the article

  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil)

    If your OBIEE 11.1.1.7.x is set up in the following way:
    - The OBIEE repository is an Oracle Database and is set up as a data warehouse.
    - Usage tracking is enabled in OBIEE. (For information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g)
    - The OBIEE instance is discovered in EM Cloud Control. (For information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets)
    - The OBIEE repository is discovered in EM Cloud Control. (For information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets)

    then we've got news for you: KM Article: OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control (Doc ID 1668236.1) takes you step by step through monitoring the SQL query performance behind your OBIEE dashboard.

    This diagnostic approach:
    - will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system including the repository.
    - should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and Oracle DB.
    - will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.

    (layout and post: Torben, authorized: Lia)

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat:

    Split by schema: We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward.

    Split by intent: Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or do those go in a separate model that is used by every project?

    Don't split at all - one giant model: This is obviously simple from a development perspective, but from my research and my intuition this seems like it could perform terribly, at design time, compile time, and possibly run time.

    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?
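
    A minimal code-first sketch of the "split by intent" option (all class, table and connection string names here are hypothetical): two small DbContexts over the same connection string, each mapping only what its module needs, with the shared Users table mapped in both contexts rather than duplicated as a full aggregate.

        using System.Data.Entity;

        public class User     { public int Id { get; set; } public string Name { get; set; } }
        public class Invoice  { public int Id { get; set; } public int UserId { get; set; } public decimal Amount { get; set; } }
        public class Shipment { public int Id { get; set; } public int UserId { get; set; } public string Address { get; set; } }

        // Billing module: owns Invoices, only reads Users.
        public class BillingContext : DbContext
        {
            public BillingContext() : base("name=AppDb") { }   // "AppDb" is an assumed connection string name

            public DbSet<Invoice> Invoices { get; set; }
            public DbSet<User> Users { get; set; }             // shared table, mapped here as well

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                modelBuilder.Entity<User>().ToTable("Users", "dbo");   // map to the existing table, don't redefine it
                base.OnModelCreating(modelBuilder);
            }
        }

        // Shipping module: owns Shipments, also reads Users.
        public class ShippingContext : DbContext
        {
            public ShippingContext() : base("name=AppDb") { }

            public DbSet<Shipment> Shipments { get; set; }
            public DbSet<User> Users { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                modelBuilder.Entity<User>().ToTable("Users", "dbo");
                base.OnModelCreating(modelBuilder);
            }
        }

    Against an existing database you would typically also call Database.SetInitializer<BillingContext>(null) (and the same for the other contexts) so that no single context tries to own schema creation.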

    Read the article

  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or to diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then they push their DB changes to integration / QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need to be using several different Oracle instances for their applications?

    Read the article

  • Firefox dependencies problem on 12.10

    - by theshu
    I did a fresh install of Ubuntu 12.10 today and now I am getting the following error:

        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         firefox-globalmenu : Depends: firefox (= 17.0.1+build1-0ubuntu0.12.10.1) but 16.0.1+build1-0ubuntu1 is installed
        E: Unmet dependencies. Try using -f.

    When I run sudo apt-get -f install, I get the following after I type "y":

        (Reading database ... 179765 files and directories currently installed.)
        Preparing to replace firefox 16.0.1+build1-0ubuntu1 (using .../firefox_17.0.1+build1-0ubuntu0.12.10.1_i386.deb) ...
        Unpacking replacement firefox ...
        dpkg-deb (subprocess): decompressing archive member: internal gzip read error: '<fd:4>: incorrect data check'
        dpkg-deb: error: subprocess <decompress> returned error exit status 2
        dpkg: error processing /var/cache/apt/archives/firefox_17.0.1+build1-0ubuntu0.12.10.1_i386.deb (--unpack):
         cannot copy extracted data for './etc/apparmor.d/usr.bin.firefox' to '/etc/apparmor.d/usr.bin.firefox.dpkg-new': unexpected end of file or stream
        Please restart all running instances of firefox, or you will experience problems.
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for man-db ...
        Errors were encountered while processing:
         /var/cache/apt/archives/firefox_17.0.1+build1-0ubuntu0.12.10.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    This error is preventing me from installing the Synaptic package manager and is also preventing me from installing anything with the Software Center or Update Manager. I tried uninstalling Firefox and also upgrading Firefox, to no avail. Open to suggestions.

    Read the article

  • RoundhousE now supports Oracle, SQL2000

    - by Robz / Fervent Coder
    RoundhousE, the database migration software that is based on SQL scripts, has added support for Oracle and SQL 2000. There have also been numerous other little things, including better logging and a script run errors table. The script errors table captures what went wrong when/if your scripts are not quite up to par or there is some other issue. A special thanks goes out to http://twitter.com/PascalMestdach and http://twitter.com/jochenjonc. They worked hard on this and all I did was provide guidance and help bring it back to the trunk. This is what an entry in the database looks like (screenshot in the original post). This is a preview of the new log:

        ==================================================
        Versioning
        ==================================================
        Attempting to resolve version from C:\code\roundhouse\code_drop\sample\deployment\_BuildInfo.xml using //buildInfo/version.
        Found version 0.5.0.188 from C:\code\roundhouse\code_drop\sample\deployment\_BuildInfo.xml.
        Migrating TestRoundhousE from version 0 to 0.5.0.188.
        Versioning TestRoundhousE database with version 0.5.0.188 based on http://roundhouse.googlecode.com/svn.
        ==================================================
        Migration Scripts
        ==================================================
        Looking for Update scripts in "C:\code\roundhouse\code_drop\sample\deployment\..\db\TestRoundhousE\up". These should be one time only scripts.
        --------------------------------------------------
        Running 0001_CreateTables.sql on (local) - TestRoundhousE.
        Running 0002_ChangeTable.sql on (local) - TestRoundhousE.
        Running 0003_TestBatchSplitter.sql on (local) - TestRoundhousE.
        --------------------------------------------------

    But what are you waiting for? Head out and grab the latest release today!

    Read the article

  • Which would be a better way to load data via ajax

    - by Mike
    I am using Google Maps and returning html/lat/long from my MySQL database.

    Currently:
    - A user picks a business category, e.g. "Video Production".
    - An ajax call is sent to a CodeIgniter controller.
    - The controller then queries the db and returns the following data via JSON: lat/long of the marker, and HTML for the popup window (this is approximately 34 rows in the database across two tables per business).
    - The ajax call receives this data and then plots the marker along with the html onto the map.

    The data that is returned from the controller is one big json object... This is done for all businesses that exist in the Video Production category (currently approx 40 businesses). As you can see, pulling this data for multiple categories (100s of businesses) can get very taxing on the server.

    My question is: would it be more beneficial to modify the process flow as such:
    - A user picks a business category, e.g. "Video Production".
    - An ajax call is sent to a CodeIgniter controller.
    - The controller then queries the database for the base location information only - lat/long level (used to change marker icon color). This would be a single row per business with several columns.
    - The ajax call receives this data and then plots the marker on the map.
    - When the user clicks a marker, an ajax call is sent to a CodeIgniter controller.
    - The controller queries the database for the HTML and additional data based on business_id.

    And if not, what are some better suggestions for this problem? In summary, this means that rather than including the HTML and additional data along with each business, only minimal location information is submitted, and that information is re-queried when each business marker is clicked.

    Potential downsides:
    - longer load times when a user clicks a marker icon
    - more code??
    - more queries to the database

    Read the article

  • Best practices for caching search queries

    - by David Esteves
    I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure exactly how to go about it, as it seems to be more complex than most caching scenarios. An example: I have a table of Locations and an API to retrieve locations via search, for an autocomplete, e.g. /api/location/Londo, and the query would be something like SELECT * FROM Locations WHERE Name LIKE 'Londo%'. These locations change very infrequently, so I would like to cache them to prevent trips to the database for no real reason and to improve the response time. Looking at caching options, I am using the Windows Azure AppFabric system; the problem is it's just a key/value cache. Since I can only retrieve items based on keys, I couldn't actually use it for this scenario as far as I'm aware. Is what I am trying to do bad use of a caching system? Should I try looking into a NoSQL DB, which could possibly run as a cache for something like this, to improve performance? Should I just cache the entire table/collection in a single key with a specific data structure which could assist with the searching, and then do the search upon retrieval of the data?
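
    A minimal sketch of the last option mentioned - caching the whole, rarely changing Locations table under one key and filtering in memory - shown here with System.Runtime.Caching.MemoryCache purely for brevity; the same get-or-load pattern applies to a distributed key/value cache such as AppFabric, just with that cache's own client calls. All names are hypothetical.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Runtime.Caching;

        public class Location
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class LocationSearchService
        {
            private const string CacheKey = "locations:all";
            private readonly Func<IList<Location>> _loadAllFromDb;   // injected DB call, e.g. a repository method

            public LocationSearchService(Func<IList<Location>> loadAllFromDb)
            {
                _loadAllFromDb = loadAllFromDb;
            }

            public IEnumerable<Location> Search(string prefix)
            {
                var cache = MemoryCache.Default;
                var all = cache.Get(CacheKey) as IList<Location>;
                if (all == null)
                {
                    all = _loadAllFromDb();   // one "SELECT * FROM Locations" per cache miss
                    cache.Set(CacheKey, all, new CacheItemPolicy
                    {
                        AbsoluteExpiration = DateTimeOffset.Now.AddHours(6)   // locations change rarely
                    });
                }

                return all
                    .Where(l => l.Name.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
                    .Take(10);
            }
        }

    Whether this beats a LIKE 'Londo%' query depends on table size and traffic; for a few thousand rows the in-memory prefix scan is typically cheap, and the single-key approach sidesteps the "can't query by value" limitation of a pure key/value cache.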

    Read the article

  • does my js replace view?

    - by Milla Well
    I am writing a web application which is based on CodeIgniter and jQuery. I primarily use ajax to call my controller functions, and it turned out that there are just 4 view*.php files, because most of my controller functions return JSON data, which is processed in my jQuery. So my actual code is divided into a kind of MVCC model:
    - CodeIgniter model (db, computations)
    - CodeIgniter controller (filtering, xss-cleaning, checking permissions, calling model functions)
    - jQuery controller (callback functions)
    - jQuery view (adding/removing classes, appending elements, ...)

    So I violate the paradigm of not using the echo function in my CodeIgniter controller and simply call echo json_encode($result); because it doesn't make any sense to me to create a view*.php file for one line of code, especially because all the regular view*.php stuff is covered in my jQuery view. I was wondering if I am missing something, or if there is a way to integrate this jQuery controller into my CodeIgniter. I found some words on this topic, but it seems pretty handmade. Are there some neat solutions? Does an MVCC model make sense?

    Read the article
