Search Results

Search found 11703 results on 469 pages for 'dev to production'.

Page 387/469

  • A really smart rails helper needed

    - by Stefan Liebenberg
    In my Rails application I have a helper function: def render_page( permalink ) page = Page.find_by_permalink( permalink ) content_tag( :h3, page.title ) + inline_render( page.body ) end If I called the page "home" with: <%= render_page :home %> and the "home" page's body was: <h1>Home</h1> bla bla <%= render_page :about %> <%= render_page :contact %> I would get the "home" page with "about" and "contact" inside it; nice and simple... right up until someone changes the "home" page's content to: <h1>Home</h1> bla bla <%= render_page :home %> <%= render_page :about %> <%= render_page :contact %> which results in an infinite loop (and a segmentation fault on WEBrick)... How would I change the helper function to something that won't fall into this trap? My first attempt was along the lines of: @@list = [] def render_page( permalink ) unless @@list.include?(permalink) @@list += [ permalink ] page = Page.find_by_permalink( permalink ) content_tag( :h3, page.title ) + inline_render( page.body ) @@list -= [ permalink ] else content_tag :b, "this page is already being rendered" end end which worked in my development environment but bombed out in production... any suggestions? Thank you, Stefan
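
    A minimal sketch of one way to guard against the recursion without a @@ class variable (assuming inline_render ends up calling render_page again for nested tags): an instance variable is reset on every request, whereas a class variable persists across requests once classes are cached in production, which is one plausible reason the original guard behaved differently there.

        def render_page(permalink)
          # per-request guard; a @@ class variable would survive across
          # requests in production, where classes are cached
          @render_page_stack ||= []
          if @render_page_stack.include?(permalink.to_s)
            return content_tag(:b, "this page is already being rendered")
          end

          page = Page.find_by_permalink(permalink)
          return "" unless page

          @render_page_stack.push(permalink.to_s)
          html = content_tag(:h3, page.title) + inline_render(page.body)
          @render_page_stack.pop
          html
        end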

    Read the article

  • CrystalDecisions.Web reference version changes suddenly when run the webapp

    - by Somebody
    I'm racking my brains over this issue. I have a webapp with a report that uses Crystal Reports; on the development PC it works fine, but after copying the same project to another PC, when I load the project (VS 2003) the following message appears: One or more projects in solution need to be updated to use Crystal Reports XI Release 2. If you choose "Yes", the update will be applied permanently... I choose "Yes" and after that I can see that the CrystalDecisions.Web reference has the correct version and location, matching the development machine, in this case 11.5.3300.0. But when I run the webapp, the version and path suddenly change to 11.0.3300.0, and when trying to view the report the following error appears: Parser Error Message: The base class includes the field 'CrystalReportViewer1', but its type (CrystalDecisions.Web.CrystalReportViewer) is not compatible with the type of control (CrystalDecisions.Web.CrystalReportViewer). The ASP.NET page has the following: <%@ Register TagPrefix="cr" Namespace="CrystalDecisions.Web" Assembly="CrystalDecisions.Web, Version=11.5.3300.0, Culture=neutral, PublicKeyToken=692fbea5521e1304" %> How is this possible? What's happening here? EDIT This is what I did: the wrong version (11.0.3300.0) was located at C:\Program Files\Common Files\Business Objects\3.0\managed and the right version (11.5.3300.0) is located at C:\Program Files\Business Objects\Common\3.5\managed. So I just deleted the files of the wrong version, and that made it work on my new computer; no more errors when running the webapp, and the report shows fine. But when I try to do the same thing on the production server, a different error comes up, this time an exception: This report could not be loaded due to the following issue: The type initializer for 'CrystalDecisions.CrystalReports.Engine.ReportDocument' threw an exception. Any idea what could be causing this error now? Here is the code: Try Dim cr As New ReportDocument cr.Load(strpath) cr.SetDatabaseLogon("usring", "pwding") Select Case rt Case 1 cr.SummaryInfo.ReportTitle = "RMA Ticket" Case 2 cr.SummaryInfo.ReportTitle = "Service Ticket" End Select 'cr.SummaryInfo.ReportTitle = tt cr.SetParameterValue("TicketNo", tn) 'cr.SummaryInfo.ReportComments = comment CrystalReportViewer1.PrintMode = CrystalDecisions.Web.PrintMode.ActiveX CrystalReportViewer1.ReportSource = cr CrystalReportViewer1.ShowFirstPage() 'cr.Close() 'cr.Dispose() Catch ex As Exception MsgBox1.alert("This report could not be loaded due to the following issue: " & ex.Message) End Try
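
    One mechanism worth trying for the version flip-flop is an assembly binding redirect in web.config, which forces the runtime onto the 11.5 assembly no matter which version another reference asks for. This is only a sketch (the publicKeyToken is the one from the @Register directive above), and whether it also clears the ReportDocument type-initializer exception on the production server would need separate testing - that error often comes down to the Crystal runtime not being installed on that box.

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="CrystalDecisions.Web"
                                  publicKeyToken="692fbea5521e1304"
                                  culture="neutral" />
                <!-- send references to the old build to the installed 11.5 assembly -->
                <bindingRedirect oldVersion="11.0.3300.0" newVersion="11.5.3300.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>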

    Read the article

  • Is there a way to prevent Maven Test from rebuilding the database?

    - by Christopher W. Allen-Poole
    I've recently been asked to, effectively, sell my department on unit testing. I can't tell you how excited this makes me, but I do have one concern. We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database. Obviously, we can't integrate that with our production server -- it would kill valuable data. How do I prevent the rebuilding without telling Maven to skip testing? The best I could come up with was to point the build at a test database (line breaks added for readability): mvn test -Ddbunit.schema=<database>test -Djdbc.url=jdbc:mysql://localhost/<database>test? createDatabaseIfNotExist=true&amp; useUnicode=true&amp;characterEncoding=utf-8 I can't help but think there must be a better way. I'm especially interested in learning whether there is an easy way to tell Maven to only run tests on particular classes without rebuilding anything else; mvn -Dtest=<test-name> test still rebuilds the database. ======= update ======= Bit of egg on my face here. I didn't realize that I was using the same variable in two places, meaning that the POM was using a "skip.test" variable both for rebuilding the database and for running the tests...
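
    One way to make the safer invocation less painful is to move those overrides out of the command line and into a profile, so the throwaway-database settings travel with the POM. This is only a sketch reusing the dbunit.schema / jdbc.url properties shown above; the profile id and database name are placeholders.

        <profile>
          <id>testdb</id>
          <properties>
            <dbunit.schema>myapp_test</dbunit.schema>
            <jdbc.url>jdbc:mysql://localhost/myapp_test?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=utf-8</jdbc.url>
          </properties>
        </profile>

    Then mvn -Ptestdb -Dtest=SomeTest test runs a single test class (via Surefire's test property) against the throwaway schema rather than anything shared.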

    Read the article

  • Cross-Platform Language + GUI Toolkit for Prototyping Multimedia Applications

    - by msutherl
    I'm looking for a language + GUI toolkit for rapidly prototyping utility applications for multimedia installations. I've been working with Max/MSP/Jitter for many years, but I'd like to add a text-based language to my 'arsenal' for tasks apart from 'content production'. (When it comes to actual media synthesis, my choices are clear [SuperCollider + MSP for audio, Jitter + Quartz + openFrameworks for video]). I'm looking for something that maintains some of the advantages of Max, but is lower-level, faster, more cross-platform (Linux support), and text-based. Integration with powerful sound/video libraries is not a requirement. Some requirements: Cross-platform (at least OSX and Linux, Windows is a plus) Fast and easy cross-platform GUIs with no platform-specific modification GUI code separated from backend code as much as possible Good for interfacing with external serial devices (micro-controllers) Good network support (UDP/TCP) Good libraries for multi-media (video, sound, OSC) are a plus Asynchronous/synchronous UNIX integration is a plus The options that come to mind: AS3/Flex (not a fan of AS3 or the idea of running in the Flash Player) openFrameworks (C++ framework, perhaps a bit too low level [looking for fast development time] and biased toward video work) Java w/ Processing libraries (like openFrameworks, just slower) Python + Qt (is Qt appropriate for rapid prototyping?) Python + Another GUI toolkit SuperCollider + Swing (yucky GUI development) Java w/ SWT Any other options? What do you recommend?

    Read the article

  • Making alpha PNGs with PHP GD

    - by WiseDonkey
    Hello, I've got a problem making alpha PNGs with PHP GD. I don't have ImageMagick etc. Though the images load perfectly well in-browser and in GFX programs, I'm getting problems with Flash AS3 (ActionScript) understanding the files; it complains that they are of an unknown type. But exporting these files from Fireworks to the same spec works fine, so I suspect something is wrong with how PHP GD writes the files. There seem to be a number of ways of doing this, with several similar functions, so maybe this isn't right? $image_p = imagecreatetruecolor($width_orig, $height_orig); $image = imagecreatefrompng($filename); imagealphablending($image_p, false); ImageSaveAlpha($image_p, true); ImageFill($image_p, 0, 0, IMG_COLOR_TRANSPARENT); imagealphablending($image_p, true); imagecopyresampled($image_p, $image, 0, 0, 0, 0, $width_orig, $height_orig, $width_orig, $height_orig); imagepng($image_p, "new2/".$filename, 0); imagedestroy($image_p); This just takes the files it's given and puts them into new files with a specified width/height - for this example it's the same as the original, but in production it resizes, which is why I'm resampling.
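
    For what it's worth, the usual GD recipe allocates a genuinely transparent colour with imagecolorallocatealpha() rather than IMG_COLOR_TRANSPARENT, and keeps alpha blending disabled on the destination for the whole copy. A sketch using the same variables as above:

        <?php
        $image   = imagecreatefrompng($filename);
        $image_p = imagecreatetruecolor($width_orig, $height_orig);

        imagealphablending($image_p, false);   // keep alpha values as-is
        imagesavealpha($image_p, true);        // write the alpha channel to the file

        // a real fully-transparent colour instead of IMG_COLOR_TRANSPARENT
        $transparent = imagecolorallocatealpha($image_p, 0, 0, 0, 127);
        imagefill($image_p, 0, 0, $transparent);

        imagecopyresampled($image_p, $image, 0, 0, 0, 0,
                           $width_orig, $height_orig, $width_orig, $height_orig);

        imagepng($image_p, "new2/" . $filename, 9);   // 0-9 compression level
        imagedestroy($image_p);
        imagedestroy($image);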

    Read the article

  • Running migration on server when deploying with capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL and a Git repository. What is the easiest way to do this? update: Here's my deploy script: set :application, "example.com" set :domain, "example.com" set :scm, :git set :repository, "[email protected]:project.git" set :use_sudo, false set :deploy_to, "/var/www/example.com" role :web, domain role :app, domain role :db, "localhost", :primary => true after "deploy", "deploy:migrate" When I run cap deploy, everything works fine until it tries to run the migration. Here's the error I'm getting: ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2)) connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))) This is why I need to run the migration from the server and not from my local machine. Any ideas?
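
    For reference, a sketch of the relevant deploy.rb change: pointing the :db role at the server (instead of "localhost", which makes Capistrano try to open an SSH connection to the machine it is running on) lets deploy:migrate run on the box that can actually reach MySQL.

        role :web, domain
        role :app, domain
        role :db,  domain, :primary => true   # run migrations over SSH on the server itself

        after "deploy", "deploy:migrate"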

    Read the article

  • Using a UITableViewController with a small-sized table?

    - by rpj
    When using a UITableViewController, the initWithStyle: method automatically creates the underlying UITableView with - according to the documentation - "the correct dimensions". My problem is that these "correct dimensions" seem to be 320x460 (the iPhone's screen size), but I'm pushing this TableView/Controller pair into a UINavigationController which is itself contained in a UIView, which is itself about half the height of the screen. No frame or bounds wrangling I can come up with seems to correctly reset the table's size, and as such it's "too long", meaning a number of rows are pushed off the bottom of the screen and are neither visible nor reachable by scrolling. So my question comes down to: what is the proper way to tell a UITableViewController to resize its component UITableView to a specified rectangle? Thanks! Update I've tried all the techniques suggested here to no avail, but I did find one interesting thing: if I eschew the UINavigationController altogether (which I'm not yet willing to do for production, but as an experiment) and add the table view as a direct subview of the enclosing view I mentioned, the frame size given is respected. The very moment I re-introduce the UINavigationController into the mix, no matter if it is added as a subview before or after the table view, and no matter whether I alloc/init it before or after the table view is added as a subview, the result is the same as it was before. I'm beginning to suspect UINavigationController isn't much of a team player... Update 2 The suggestion to check the frame size after the table view is on screen was a good one: it turns out that the navigation controller is in fact resizing it some time between load and display. My solution, hacky at best, has been to cache the frame given on load and to reset it if changed at the beginning of tableView:cellForRowAtIndexPath:. Why there, you ask? Because it's the one place I found that worked, that's why! I don't consider this a solution as it's obviously improper, but for the benefit of anyone else reading, it does seem to work.

    Read the article

  • Sql Exception: Error converting data type numeric to numeric

    - by Lucifer
    Hello We have a very strange issue with a database that has been moved from staging to production. The first time the database was moved it was by detaching, copying and reattaching, the second time we tried restoring from a backup of the staging. Both SQL Servers are the same version of MS SQL 2008, running on 64 bit hardware. The code accessing the database is the same build, built using the .net 2.0 framework. Here is the error message and some of the stack trace: Exception Details: System.Data.SqlClient.SqlException: Error converting data type numeric to numeric. Stack Trace: [SqlException (0x80131904): Error converting data type numeric to numeric.] System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1953274 System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4849707 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194 System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392 System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +204 System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +954 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162 System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) +175 System.Data.SqlClient.SqlCommand.ExecuteNonQuery() +137 Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016

    Read the article

  • Lightweight alternative to Manual/AutoResetEvent in C#

    - by sweetlilmre
    Hi, I have written what I hope is a lightweight alternative to using the ManualResetEvent and AutoResetEvent classes in C#/.NET. The reasoning behind this was to have Event like functionality without the weight of using a kernel locking object. Although the code seems to work well in both testing and production, getting this kind of thing right for all possibilities can be a fraught undertaking and I would humbly request any constructive comments and or criticism from the StackOverflow crowd on this. Hopefully (after review) this will be useful to others. Usage should be similar to the Manual/AutoResetEvent classes with Notify() used for Set(). Here goes: using System; using System.Threading; public class Signal { private readonly object _lock = new object(); private readonly bool _autoResetSignal; private bool _notified; public Signal() : this(false, false) { } public Signal(bool initialState, bool autoReset) { _autoResetSignal = autoReset; _notified = initialState; } public virtual void Notify() { lock (_lock) { // first time? if (!_notified) { // set the flag _notified = true; // unblock a thread which is waiting on this signal Monitor.Pulse(_lock); } } } public void Wait() { Wait(Timeout.Infinite); } public virtual bool Wait(int milliseconds) { lock (_lock) { bool ret = true; // this check needs to be inside the lock otherwise you can get nailed // with a race condition where the notify thread sets the flag AFTER // the waiting thread has checked it and acquires the lock and does the // pulse before the Monitor.Wait below - when this happens the caller // will wait forever as he "just missed" the only pulse which is ever // going to happen if (!_notified) { ret = Monitor.Wait(_lock, milliseconds); } if (_autoResetSignal) { _notified = false; } return (ret); } } }
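
    A short usage sketch against the class as posted, mirroring how an AutoResetEvent is typically driven (assumes System and System.Threading are imported):

        // auto-reset signal, initially unset
        var done = new Signal(false, true);

        ThreadPool.QueueUserWorkItem(_ =>
        {
            // ... background work ...
            done.Notify();                    // the equivalent of Set()
        });

        bool completed = done.Wait(1000);     // block for up to one second
        Console.WriteLine(completed ? "worker finished" : "timed out");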

    Read the article

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine, 16G of memory and a RAID10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of information to be found on the Internet on this subject; however, there seems to be very little information available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit). The schema change we are planning is to add an integer column to all tables and make it the primary key, replacing the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
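
    For the INSERT INTO .. SELECT FROM route, a sketch of the session/global settings most often suggested for bulk-loading InnoDB (table and column names are placeholders; verify each setting against the 5.0 manual before touching production):

        SET SESSION unique_checks = 0;
        SET SESSION foreign_key_checks = 0;
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- fewer log flushes during the load

        INSERT INTO orders_new (old_pk, col_a, col_b)   -- new auto-increment id fills itself in
        SELECT old_pk, col_a, col_b FROM orders;

        SET SESSION unique_checks = 1;
        SET SESSION foreign_key_checks = 1;
        SET GLOBAL innodb_flush_log_at_trx_commit = 1;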

    Read the article

  • Configuring IHS server to direct traffic to the Netty component bound to a port

    - by rbot
    I have a server component (based on JBoss Netty, which can maintain and handle persistent connections) deployed in WAS. When deployed and initiated within the WAS environment, this component binds to a port and listens for incoming HTTP connections. [Why I had to deploy a Netty HTTP server within WAS is another story - a management requirement!! Netty is deployed in WAS as a Spring bean which, when initiated, runs on a port on the machine, independent of WAS.] Clients (a mobile app) were able to establish persistent HTTP connections (to the above URL::port) with this Netty component and send/receive requests. Now I have to replicate this feature in our production environment, where an IHS server (web server) sits in front of WAS. What I expected is an IHS URL which could redirect the incoming packets to the specific port on WAS, so that the client apps can establish a similar persistent HTTP connection. Our server admin tried a few combinations and we have not been able to work out how to proceed. Your expert ideas would be highly appreciated.
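
    Since IHS is Apache httpd under the covers, one option (if the admin prefers not to route this through the WebSphere plug-in) is a plain reverse-proxy rule pointing at the Netty port. A sketch only - the path, host and port are made up, and the proxy's own keep-alive/timeout settings will then apply to the long-lived client connections:

        # httpd.conf on the IHS box
        LoadModule proxy_module      modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        ProxyPass        /device/ http://was-host.example.com:9090/
        ProxyPassReverse /device/ http://was-host.example.com:9090/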

    Read the article

  • How to use HSQLDB in Oracle query syntax mode?

    - by Jan Algermissen
    I am trying to use HSQLDB as an embedded database in a Spring application (for testing). As the target production database is Oracle, I would like to use HSQLDB's Oracle syntax mode feature. In the Spring config I use <jdbc:embedded-database type="HSQL" id="dataSource"> </jdbc:embedded-database> <jdbc:initialize-database data-source="dataSource" enabled="true"> <jdbc:script location="classpath:schema.sql"/> </jdbc:initialize-database> And in schema.sql at the top I wrote: SET DATABASE SQL SYNTAX ORA TRUE; However, when running my test, I get the following error: java.sql.SQLException: Unexpected token: DATABASE in statement [SET DATABASE SQL SYNTAX ORA TRUE] Is this a syntax error or a permissions error or something entirely different? Thanks - also for any pointers that might lead to the answer. Given that HSQL is the Spring default for jdbc:embedded-database and given that the target is Oracle, this scenario should actually be very common. However, I found nothing on the Web even touching the issue.
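
    SET DATABASE SQL SYNTAX ORA only exists in HSQLDB 2.x, so one likely cause is an older 1.8.x jar on the classpath rejecting the statement. A sketch of an alternative that forces the mode through the connection URL on a plain data source (instead of <jdbc:embedded-database>), assuming an HSQLDB 2.x dependency:

        <bean id="dataSource"
              class="org.springframework.jdbc.datasource.DriverManagerDataSource">
          <property name="driverClassName" value="org.hsqldb.jdbc.JDBCDriver"/>
          <!-- sql.syntax_ora is an HSQLDB 2.x connection property -->
          <property name="url" value="jdbc:hsqldb:mem:testdb;sql.syntax_ora=true"/>
          <property name="username" value="sa"/>
          <property name="password" value=""/>
        </bean>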

    Read the article

  • How to perform Linq select new with datetime in SQL 2008

    - by kd7iwp
    In our C# code I recently changed a line inside a LINQ to SQL select new query as follows: OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.Year.ToString() + "-" + p.OrderDate.Value.Month.ToString() + "-" + p.OrderDate.Value.Day.ToString() : "") To: OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "") The change makes the line smaller and cleaner. It also works fine with our SQL 2008 database in our development environment. However, when the code was deployed to our production environment, which uses SQL 2005, I received an exception stating: Nullable Type must have a value. For further analysis I copied (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "") into a string (outside of a LINQ statement) and had no problems at all, so it only causes an issue inside my LINQ. Is this problem just something to do with SQL 2005 using different date formats than SQL 2008? Here's more of the LINQ: dt = FilteredOrders.Where(x => x != null).Select(p => new { Order = p.OrderId, link = "/order/" + p.OrderId.ToString(), StudentId = (p.PersonId.HasValue ? p.PersonId.Value : 0), FirstName = p.IdentifierAccount.Person.FirstName, LastName = p.IdentifierAccount.Person.LastName, DeliverBy = p.DeliverBy, OrderDate = p.OrderDate.HasValue ? p.OrderDate.Value.Date.ToString("yyyy-mm-dd") : ""}).ToDataTable(); This is selecting from a List of Order objects. The FilteredOrders list comes from another LINQ to SQL query, and I call .AsEnumerable on it before giving it to this particular select new query. Doing this in regular code works fine: if (o.OrderDate.HasValue) tempString += " " + o.OrderDate.Value.Date.ToString("yyyy-mm-dd");
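
    One thing to double-check independently of the SQL Server version: in .NET format strings "mm" means minutes; months are "MM", so the rewritten line quietly changes what ends up in the column. A sketch of the corrected projection (the null handling is unchanged):

        OrderDate = p.OrderDate.HasValue
            ? p.OrderDate.Value.ToString("yyyy-MM-dd")   // MM = month, mm = minutes
            : ""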

    Read the article

  • Calculate an Internet (aka IP, aka RFC791) checksum in C#

    - by Pat
    Interestingly, I can find implementations for the Internet Checksum in almost every language except C#. Does anyone have an implementation to share? Remember, the internet protocol specifies that: "The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero." More explanation can be found from Dr. Math. There are some efficiency pointers available, but that's not really a large concern for me at this point. Please include your tests! (Edit: Valid comment regarding testing someone else's code - but I am going off of the protocol and don't have test vectors of my own and would rather unit test it than put into production to see if it matches what is currently being used! ;-) Edit: Here are some unit tests that I came up with. They test an extension method which iterates through the entire byte collection. Please comment if you find fault in the tests. [TestMethod()] public void InternetChecksum_SimplestValidValue_ShouldMatch() { IEnumerable<byte> value = new byte[1]; // should work for any-length array of zeros ushort expected = 0xFFFF; ushort actual = value.InternetChecksum(); Assert.AreEqual(expected, actual); } [TestMethod()] public void InternetChecksum_ValidSingleByteExtreme_ShouldMatch() { IEnumerable<byte> value = new byte[]{0xFF}; ushort expected = 0xFF; ushort actual = value.InternetChecksum(); Assert.AreEqual(expected, actual); } [TestMethod()] public void InternetChecksum_ValidMultiByteExtrema_ShouldMatch() { IEnumerable<byte> value = new byte[] { 0x00, 0xFF }; ushort expected = 0xFF00; ushort actual = value.InternetChecksum(); Assert.AreEqual(expected, actual); }
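
    For comparison, a sketch of an extension method that satisfies the three tests above (16-bit big-endian words, an odd trailing byte padded with zero, carries folded back in, final one's complement) - written from the RFC 1071 description rather than from any production implementation:

        using System.Collections.Generic;

        public static class ChecksumExtensions
        {
            public static ushort InternetChecksum(this IEnumerable<byte> data)
            {
                uint sum = 0;
                ushort word = 0;
                bool highByte = true;

                foreach (byte b in data)
                {
                    if (highByte)
                        word = (ushort)(b << 8);     // start a new 16-bit word
                    else
                        sum += (ushort)(word | b);   // complete the word and accumulate it
                    highByte = !highByte;
                }
                if (!highByte)
                    sum += word;                     // odd length: low byte padded with zero

                while ((sum >> 16) != 0)             // fold carries back into the low 16 bits
                    sum = (sum & 0xFFFF) + (sum >> 16);

                return (ushort)(~sum & 0xFFFF);      // one's complement of the one's complement sum
            }
        }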

    Read the article

  • Rack middleware deadlock

    - by Joel
    I include this simple Rack Middleware in a Rails application: class Hello def initialize(app) @app = app end def call(env) [200, {"Content-Type" => "text/html"}, "Hello"] end end Plug it in inside environment.rb: ... Dir.glob("#{RAILS_ROOT}/lib/rack_middleware/*.rb").each do |file| require file end Rails::Initializer.run do |config| config.middleware.use Hello ... I'm using Rails 2.3.5, Webrick 1.3.1, ruby 1.8.7 When the application is started in production mode, everything works as expected - every request is intercepted by the Hello middleware, and "Hello" is returned. However, when run in development mode, the very first request works returning "Hello", but the next request hangs. Interrupting webrick while it is in the hung state yields this: ^C[2010-03-24 14:31:39] INFO going to shutdown ... deadlock 0xb6efbbc0: sleep:- - /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:31 deadlock 0xb7d1b1b0: sleep:J(0xb6efbbc0) (main) - /usr/lib/ruby/1.8/webrick/server.rb:113 Exiting /usr/lib/ruby/1.8/webrick/server.rb:113:in `join': Thread(0xb7d1b1b0): deadlock (fatal) from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start' from /usr/lib/ruby/1.8/webrick/server.rb:113:in `each' from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start' from /usr/lib/ruby/1.8/webrick/server.rb:23:in `start' from /usr/lib/ruby/1.8/webrick/server.rb:82:in `start' from /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/commands/server.rb:111 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from script/server:3 Something to do with the class reloader in development mode. There is also mention of deadlock in the exception. Any ideas what might be causing this? Any recommendations as to the best approach to debug this?
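
    One cheap thing to rule out before digging into the reloader itself: the Rack spec wants the body to be an object that responds to each (a bare String happens to satisfy that on Ruby 1.8, but an Array is the conventional, unambiguous shape for middleware further up the stack to wrap). A sketch - whether this alone clears the development-mode hang is worth verifying:

        class Hello
          def initialize(app)
            @app = app
          end

          def call(env)
            # Array body: responds to #each, and #close (if defined) gets called
            [200, { "Content-Type" => "text/html" }, ["Hello"]]
          end
        end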

    Read the article

  • Extract primary key from MySQL in PHP

    - by Parth
    I have created a PHP script but I am stuck on extracting the primary key. I have given the flow below; please help me work out how to modify it to get the primary key. I am using a MySQL DB and working with Joomla. My requirement is to track activity like insert/update/delete on any table and store it in another audit table using triggers, i.e. I am doing auditing. DB table structure: a few tables have neither a PK nor an auto-increment key. The flow of my script is: I fetch all tables from the DB. I check whether the table has any trigger or not. If yes, it moves on to check the next table, and so on. If it doesn't find any trigger, it creates the triggers for the table, such that: it first checks whether the table has a primary key or not (for inserting into the tracking audit table on every change made); if it has a primary key, it uses it in the creation of the trigger; if it doesn't find any PK, it proceeds to create the trigger without inserting any id into the audit table. Now my problem is that I need the PK every time, so that I can record the id of the row in which the insert/update/delete was performed and later use this audit table to replicate the changes into the production DB. As I mentioned earlier, some tables have no PK or auto-increment column, so what should I do to get the particular id in which a change was made? Please guide me.
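
    For the missing piece - discovering the primary key column(s) of a table while the script is generating the triggers - MySQL exposes this through information_schema (or SHOW KEYS). A sketch, with 'mydb' and 'mytable' as placeholders:

        SELECT k.COLUMN_NAME, k.ORDINAL_POSITION
        FROM information_schema.TABLE_CONSTRAINTS t
        JOIN information_schema.KEY_COLUMN_USAGE k
          ON  k.CONSTRAINT_NAME = t.CONSTRAINT_NAME
          AND k.TABLE_SCHEMA    = t.TABLE_SCHEMA
          AND k.TABLE_NAME      = t.TABLE_NAME
        WHERE t.CONSTRAINT_TYPE = 'PRIMARY KEY'
          AND t.TABLE_SCHEMA = 'mydb'
          AND t.TABLE_NAME   = 'mytable'
        ORDER BY k.ORDINAL_POSITION;

        -- or, more simply:
        SHOW KEYS FROM mydb.mytable WHERE Key_name = 'PRIMARY';

    For the tables that genuinely have no primary or unique key there is no reliable row id to record; the usual options are adding a surrogate auto-increment column to those tables or logging the full old/new column values in the audit row instead.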

    Read the article

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8-CPU server and MySQL, and I start 7 delayed_job processes: RAILS_ENV=production script/delayed_job -n 7 start Q1: I'm wondering whether it is possible that 2 or more delayed_job processes start processing the same job (the same row in the delayed_jobs table). I checked the code of the delayed_job plugin but cannot find a lock directive where I would expect one. I would think each process should lock the table before executing an UPDATE on the locked_by column. They lock the record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by...). Is that really enough? No locking needed? Why? I know that UPDATE has higher priority than SELECT, but I think that does not have an effect in this case. My understanding of the multi-threaded situation is: Process1: Get waiting job X. [OK] Process2: Get waiting job X. [OK] Process1: Update locked_by field. [OK] Process2: Update locked_by field. [OK] Process1: Get waiting job X. [Already processed] Process2: Get waiting job X. [Already processed] I think in some cases two workers can get the same information and start processing the same job. Q2: Are 7 delayed_job workers a good number for an 8-CPU server? Why or why not? Thanks!
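
    For Q1, the reason no separate table lock is needed is that the claim itself is a single atomic UPDATE whose WHERE clause re-checks that the job is still unclaimed, and a worker only proceeds when the affected-row count is 1. A sketch of the idea (the literal values are illustrative, not delayed_job's exact SQL):

        UPDATE delayed_jobs
           SET locked_at = NOW(), locked_by = 'host:web1 pid:1234'
         WHERE id = 42
           AND (locked_at IS NULL OR locked_at < NOW() - INTERVAL 4 HOUR)
           AND failed_at IS NULL;
        -- 1 row affected  -> this worker owns job 42
        -- 0 rows affected -> another worker claimed it first; skip it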

    Read the article

  • Appropriate SQL Server Permissions for Developers

    - by BJ Safdie
    After a couple of Google searches and a quick look at questions here, I cannot seem to find what I thought would be a cookbook answer for SQL Server permissions. As I often see in small shops, most developers here were using an admin account for SQL Server while developing. I want to set up roles and permissions that I can assign to developers so that we can get our jobs done, but also do so with the minimum permissions required. Can anyone offer advice on what SQL Server permissions to assign? Components: SQL Server 2008 SQL Server Reporting Services (SSRS) 2008 SQL Server Integration Services (SSIS) 2008 Platforms: Production Staging/QA Development/Integration We are running "Mixed Mode" security because of some legacy apps and networks, but are moving to Windows Auth. I am not sure if that really affects the role set up. I plan to set up access for Developers to Prod and Staging/QA DBs as Read-Only. However, I still want developers to retain the ability to run Profiling. We need Deployment accounts with higher privilege levels. We are currently trying to figure out exactly what privileges we need for SSIS package deployments. Within the Development Server, Developers need broad privileges. However, I am not sure that just making them all admins is really the best choice. It's hard to believe that no one has published a decent example script that sets up these kinds of roles with a good set of appropriate permissions for developers and deployers. We can probably figure this all out by locking things down and then adding permissions as we discover the need, but that will be way too big a PITA for everyone. Can anyone point me to, or provide, a good exemplar for permissions for these kinds of roles on these kinds of platforms?
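
    Not a cookbook, but a sketch of the pieces that usually cover the "read-only on production/staging plus the ability to run Profiler" case - the database and group names are placeholders, and ALTER TRACE is a server-level grant:

        -- per database (production / staging)
        USE MyAppDb;
        CREATE USER [DOMAIN\DevGroup] FOR LOGIN [DOMAIN\DevGroup];
        EXEC sp_addrolemember 'db_datareader', 'DOMAIN\DevGroup';
        GRANT VIEW DEFINITION TO [DOMAIN\DevGroup];   -- let developers script out objects

        -- server level: SQL Server Profiler requires ALTER TRACE
        USE master;
        GRANT ALTER TRACE TO [DOMAIN\DevGroup];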

    Read the article

  • Play framework does not return page and static content

    - by Anton
    I'm using the Play framework in production for one of my web projects. From time to time Play does not render the main page or does not return some of the static content files. I have attached a few screenshots below. The first screenshot shows the Firebug console: loading of the site is stuck at the beginning, while serving the home page. The second screenshot shows the Fiddler console, where 2 static resources are not loading. This issue is hard to reproduce; it happens about 1 time in 15, and I have to delete cached data and reload the page (pressing CTRL-F5 in FF). The issue can be reproduced in most browsers. Initially I thought there was something wrong with the hosting provider, but I have changed hosting providers and the issue has not gone away. The version of Play is 1.2.2, running as a standalone server. I'm not sure, but perhaps deploying Play to Jetty/Tomcat/Resin would help. I'm also thinking about rewriting the application on another stack (one well known to me - J2EE, Spring, whatever). I have no idea how to debug and resolve this issue. Any clue? Has anyone faced the same issue with Play before?

    Read the article

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via Carrierwave locally to using Amazon S3 via the fog gem in my Rails 3.1 app. Images are being added fine, but when I click on an image in my application, the URL exposes my access key and a signature. Here is a sample URL (XXX replaces the string with the info): https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418 This is happening in development (localhost:3000) and when I am using Heroku for production. Here is my uploader: class ImageUploader < CarrierWave::Uploader::Base include CarrierWave::RMagick storage :fog def store_dir "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}" end process :convert => :jpg process :resize_to_limit => [640, 640] version :thumb do process :convert => :jpg process :resize_to_fill => [280, 205] end version :avatar do process :convert => :jpg process :resize_to_fill => [120, 120] end end And my config/initializers/fog.rb: CarrierWave.configure do |config| config.fog_credentials = { :provider => 'AWS', :aws_access_key_id => 'XXX', :aws_secret_access_key => 'XXX', } config.fog_directory = 'bucketname' config.fog_public = false end Anyone know how to make sure this information isn't exposed?
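
    For what it's worth, those query parameters are exactly what config.fog_public = false buys you: fog hands back a signed, expiring URL, and the AWSAccessKeyId in it is not the secret key (the Signature is derived from the secret, which never leaves the server). If the images may be world-readable, flipping fog_public gives plain URLs; otherwise the signed-URL lifetime can be shortened. A sketch - the expiration option name comes from Carrierwave's fog support and is worth checking against your gem version:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
            :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
          }
          config.fog_directory = 'bucketname'

          config.fog_public = true                          # public objects, unsigned URLs
          # or keep the objects private and shorten the signature lifetime:
          # config.fog_public = false
          # config.fog_authenticated_url_expiration = 600   # seconds
        end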

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction This happened when a different task tried to access some of the same tables during the execution of my long running transaction. What confuses me is that the long running transaction only inserts or updates into temporary tables. All access to non-temporary tables are selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So my first question, is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long running transaction? If the long running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
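
    For the "which tables are causing the conflict" part, MySQL can report the waiting and blocking transactions directly; the INNODB_TRX / INNODB_LOCK_WAITS tables need 5.1+ with the InnoDB plugin, and on older servers SHOW ENGINE INNODB STATUS gives a text version of the same information. A sketch:

        SHOW ENGINE INNODB STATUS;   -- see the TRANSACTIONS / lock wait sections

        SELECT r.trx_mysql_thread_id AS waiting_thread,
               r.trx_query           AS waiting_query,
               b.trx_mysql_thread_id AS blocking_thread,
               b.trx_query           AS blocking_query
        FROM information_schema.INNODB_LOCK_WAITS w
        JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
        JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;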

    Read the article

  • Rhino Mocks Partial Mock

    - by dotnet crazy kid
    I am trying to test the logic in some existing classes. It is not possible to refactor the classes at present as they are very complex and in production. What I want to do is create a mock object and test a method that internally calls another method that is very hard to mock. So I want to just set a behaviour for the secondary method call. But when I set up the behaviour for the method, the code of the method is invoked and fails. Am I missing something, or is this just not possible to test without refactoring the class? I have tried all the different mock types (Strict, Stub, Dynamic, Partial, etc.) but they all end up calling the method when I try to set up the behaviour. using System; using MbUnit.Framework; using Rhino.Mocks; namespace MMBusinessObjects.Tests { [TestFixture] public class PartialMockExampleFixture { [Test] public void Simple_Partial_Mock_Test() { const string param = "anything"; //setup mocks MockRepository mocks = new MockRepository(); var mockTestClass = mocks.StrictMock<TestClass>(); //record behaviour *** actually calls into the real method *** Expect.Call(mockTestClass.MethodToMock(param)).Return(true); //never get to here mocks.ReplayAll(); //this is what i want to test Assert.IsTrue(mockTestClass.MethodIWantToTest(param)); } public class TestClass { public bool MethodToMock(string param) { //some logic that is very hard to mock throw new NotImplementedException(); } public bool MethodIWantToTest(string param) { //this method calls MethodToMock if( MethodToMock(param) ) { //some logic i want to test } return true; } } } }
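
    The behaviour described - the real MethodToMock running while the expectation is being recorded - is what happens when Rhino Mocks cannot intercept the member: as a dynamic-proxy library it can only override virtual members (or mock interfaces). A sketch of the two changes that usually make this scenario work, marking the method virtual and using a partial mock so the untouched MethodIWantToTest still runs its real code:

        public class TestClass
        {
            // must be virtual (or declared on an interface) for Rhino Mocks to intercept
            public virtual bool MethodToMock(string param)
            {
                throw new NotImplementedException();
            }

            public virtual bool MethodIWantToTest(string param)
            {
                if (MethodToMock(param))
                {
                    // some logic i want to test
                }
                return true;
            }
        }

        [Test]
        public void Partial_Mock_Stubs_Only_The_Hard_Method()
        {
            const string param = "anything";
            MockRepository mocks = new MockRepository();
            var testClass = mocks.PartialMock<TestClass>();

            Expect.Call(testClass.MethodToMock(param)).Return(true);  // intercepted, not executed
            mocks.ReplayAll();

            Assert.IsTrue(testClass.MethodIWantToTest(param));        // real code runs here
            mocks.VerifyAll();
        }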

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9MB on disk, 10MB of indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter whether the inserts were in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL for the 180s case; I was testing in psql directly. In Django, add ~50% overhead for WAL logging. I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing - how can 10MB worth of inserts generate 10x 16MB log segments? Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to: a small table (198 rows, 16kB on disk), a large table (1.2M rows, 59MB data + 89MB index on disk), and another large table (2.2M rows, 198MB + 210MB). So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
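
    If dropping the constraints does turn out to be the pragmatic answer, it can at least be scripted around the hourly rebuild rather than removed from the Django models - a sketch with placeholder table, column and constraint names:

        BEGIN;
        ALTER TABLE report_rows
          DROP CONSTRAINT report_rows_small_id_fkey,
          DROP CONSTRAINT report_rows_big1_id_fkey,
          DROP CONSTRAINT report_rows_big2_id_fkey;

        TRUNCATE report_rows;
        INSERT INTO report_rows SELECT * FROM report_rows_staging;

        -- re-adding the constraints validates the finished table in one pass
        -- per key instead of three referenced-table lookups per inserted row
        ALTER TABLE report_rows
          ADD CONSTRAINT report_rows_small_id_fkey FOREIGN KEY (small_id) REFERENCES small_table (id),
          ADD CONSTRAINT report_rows_big1_id_fkey  FOREIGN KEY (big1_id)  REFERENCES big_table1 (id),
          ADD CONSTRAINT report_rows_big2_id_fkey  FOREIGN KEY (big2_id)  REFERENCES big_table2 (id);
        COMMIT;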

    Read the article

  • How to hand-over a TCP listening socket with minimal downtime?

    - by Shtééf
    While this question is tagged EventMachine, generic BSD-socket solutions in any language are much appreciated too. Some background: I have an application listening on a TCP socket. It is started and shut down with a regular System V style init script. My problem is that it needs some time to start up before it is ready to service the TCP socket. It's not too long, perhaps only 5 seconds, but that's 5 seconds too long when a restart needs to be performed during a workday. It's also crucial that existing connections remain open and are finished normally. Reasons for a restart of the application are patches, upgrades, and the like. I unfortunately find myself in the position that, every once in a while, I need to do this kind of thing in production. The question: I'm looking for a way to do a neat hand-over of the TCP listening socket, from one process to another, and as a result get only a split second of downtime. I'd like existing connections / sockets to remain open and finish processing in the old process, while the new process starts servicing new connections. Is there some proven method of doing this using BSD sockets? (Bonus points for an EventMachine solution.) Are there perhaps open-source libraries out there implementing this, that I can use as is, or use as a reference? (Again, non-Ruby and non-EventMachine solutions are appreciated too!)
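
    A generic BSD-socket sketch of the fork/exec hand-over (Python only for brevity; the same fd-inheritance trick works from any language): the old process forks, the child execs the new build and adopts the still-open listening descriptor, and the parent stops accepting but keeps running until its in-flight connections drain - so the listener is never closed and no connection window is lost.

        import os
        import socket
        import sys

        LISTEN_FD_ENV = "LISTEN_FD"          # naming convention invented for this sketch

        def get_listener(port=8080):
            fd = os.environ.get(LISTEN_FD_ENV)
            if fd is not None:
                # new generation: adopt the inherited listening socket
                return socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=int(fd))
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", port))
            sock.listen(128)
            return sock

        def hand_over(listener):
            os.set_inheritable(listener.fileno(), True)    # let the fd survive exec
            if os.fork() == 0:
                # child: become the new server generation
                os.environ[LISTEN_FD_ENV] = str(listener.fileno())
                os.execv(sys.executable, [sys.executable] + sys.argv)
            # parent (old generation): stop accepting, finish existing
            # connections, then exit on its own schedule
            listener.close()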

    Read the article

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were not any code changes (they just powered off, moved, re-racked and powered on) but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to a network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing: [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing. java.io.IOException: Too many open files at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ... [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031) Using lsof I check the number of file descriptors and I see quite a few entries which look like: java 18510 root 8556u sock 0,4 1555182 can't identify protocol java 18510 root 8557u sock 0,4 1555320 can't identify protocol java 18510 root 8558u sock 0,4 1555736 can't identify protocol java 18510 root 8559u sock 0,4 1555883 can't identify protocol If I do a count of open file descriptors every minute I see it growing by 12 every minute. I have no idea what these sockets are. I've undeployed my application so there is only a plain Glassfish instance running and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? thanks, chuck

    Read the article
