Search Results

Search found 7957 results on 319 pages for 'production databases'.

Page 281/319

  • How to track a project's extraneous quirks

    - by Steerpike
    Hello. It's possible that the answer to this question is just standard bug-tracking software like Jira or FogBugz, but I'm kind of hoping someone out there knows a better system for what I'm describing. My current project requires a lot of setup quirkiness before I can get into a position where I can actually start coding. For example:
    - A series of convoluted internal company commands before I can initiate an SSH session.
    - Making sure any third-party classes that make external calls have the internal company proxy options set up, while also making sure these settings won't be applied when installed in a production environment.
    - Making sure the proxy is set before trying to install PEAR packages.
    - Other similar things, mostly involving internal IT security and getting it to work with modules and packages.
    Individually none of these things is a huge deal, and I've written extensive notes to myself with the exact commands and additions I've made, but they're currently in a general text document, and it's going to be hard to remember exactly where what I need is far down the line. We also have several new staff starting soon, and I'd rather give them an easier time setting up their programming environments. Like I said, they aren't 'programming quirks' exactly, just the constant fiddling that comes before programming starts in earnest. Any thoughts on the best way to document these things, for my own and future generations' sanity?

    Read the article

  • Boost shared_ptr use_count function

    - by photo_tom
    My application problem is the following: I have a large structure, foo. Because these are large, and for memory-management reasons, we do not wish to delete them when processing of the data is complete. We store them in a std::vector<boost::shared_ptr<foo> >. My question relates to knowing when all processing is complete. Our first decision was that we do not want any other application code to mark a "complete" flag in the structure, because there are multiple execution paths in the program and we cannot predict which one is the last. So in our implementation, once processing is complete, we delete all copies of the boost::shared_ptr<foo> except for the one in the vector. This drops the reference count in the shared_ptr to 1. Is it practical to use shared_ptr::use_count(), checking whether it is equal to 1, to know when all other parts of my app are done with the data? One additional reason I'm asking is that the Boost documentation for shared_ptr recommends against using use_count() in production code.
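
    For reference, a minimal sketch of the pattern being described (the foo type and pool layout here are assumptions; note a use_count() of 1 is only trustworthy if no other thread can be copying or releasing the pointer at that moment):

        #include <boost/shared_ptr.hpp>
        #include <vector>

        struct foo { /* large structure */ };

        // Scan the pool: a use_count() of 1 means the vector holds the sole
        // remaining reference, i.e. all other owners have released theirs.
        void find_finished(const std::vector<boost::shared_ptr<foo> >& pool) {
            for (std::size_t i = 0; i < pool.size(); ++i) {
                if (pool[i].use_count() == 1) {
                    // processing elsewhere is done with pool[i]; it can be
                    // recycled for new data without a delete/new cycle
                }
            }
        }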

    Read the article

  • Checking Selected Radio Button after POST

    - by coffeeaddict
    I've been using ASP.NET controls, which perform a lot of the manual work for you. But I'm going back to basics, to what everyone else does: standard input tags. So, for example, say I have a radio button group and I select a button. When the form submits and does a POST back to whatever action="MyPage.aspx" points at, is grabbing and checking the selected radio button's value always done like this?

        <label><input type="radio" name="rbGroup" value='<%# ((Action)Container.DataItem).ID %>'/><%# ((Action)Container.DataItem).Name %></label>

    Here I'm putting the ID into the value. Then, when it hits the page my action specifies, I check which was selected by parsing that ID out of the value:

        string selection = Request.Form["rbGroup"];
        int dbRecordIdSelected = int.Parse(selection.Substring(1));

    So now I can check the ID they selected - the ID of the db record that gave that selected radio its value. Is that how you basically always check which radio was selected: by checking the name/value pair that comes across for the selected radio button group name? And then you can append IDs or whatever else you want to grab and parse out for additional server-side logic, once the request reaches the server and the page specified in the action attribute? The above is not production code, just something to explain what I'm talking about.
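
    For what it's worth, a hedged sketch of the receiving side (names come from the question; the null/empty and TryParse guards are additions for safety):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack) return;

            // null when no radio in the group was selected
            string selection = Request.Form["rbGroup"];
            if (!string.IsNullOrEmpty(selection))
            {
                // strip whatever prefix the markup prepended, then parse
                int recordId;
                if (int.TryParse(selection.Substring(1), out recordId))
                {
                    // recordId is the db key bound into the value attribute
                }
            }
        }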

    Read the article

  • How can I add file locations to a database after they are uploaded using a Perl CGI script?

    - by Paul K
    I have a CGI program I have written in Perl. One of its functions is to upload pics to the server. All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db? I would rather do that than change the script to actually upload the pics into the db - I have heard horror stories about uploading binary files to databases. Since I am new to all of this, I am at a loss; I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db. I am using a Perl CGI script, a MySQL db, and a Linux server, and the files are being uploaded to the server. I am NOT looking to add the actual files to the db, just their location(s).
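
    A minimal sketch of the usual shape for this, assuming a hypothetical uploads table and that $dir and $filename are already known from the existing upload code:

        use strict;
        use warnings;
        use DBI;

        # Assumed schema:
        #   CREATE TABLE uploads (id INT AUTO_INCREMENT PRIMARY KEY,
        #                         path VARCHAR(255) NOT NULL)
        my $dbh = DBI->connect('DBI:mysql:database=mydb;host=localhost',
                               'user', 'password', { RaiseError => 1 });

        sub record_upload {
            my ($dir, $filename) = @_;
            # store only the file's location, never its binary contents
            my $sth = $dbh->prepare('INSERT INTO uploads (path) VALUES (?)');
            $sth->execute("$dir/$filename");
        }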

    Read the article

  • [RAILS] Why is ssl_requirement clearing the Flash? (Chrome Mac)

    - by aaronrussell
    I am using ssl_requirement and, since setting it up, my application's flash messages are disappearing. I've modified the plugin slightly, as accounts can optionally have a domain mapped to them. In that case the non-SSL areas of the site should use the mapped domain, whereas the SSL areas should use the subdomain:

        def ensure_proper_protocol
          return true if ssl_allowed?
          if ssl_required? && !request.ssl?
            redirect_to "https://#{@account.subdomain}." + APP_CONF[:domain] + request.request_uri
            flash.keep
            return false
          elsif request.ssl? && !ssl_required?
            redirect_to "http://#{@account.sub_or_mapped_domain}" + request.request_uri
            flash.keep
            return false
          end
        end

    The application is broadly split into a website (front end) and an admin (back end). ALL of the admin area uses SSL, so in the AdminController I have overridden ssl_required? with:

        protected

        def ssl_required?
          return false if RAILS_ENV == "test" || RAILS_ENV == "development"
          true
        end

    Interestingly, flash messages work fine in the development environment, where I am bypassing the SSL requirement, but in my production environment, where SSL is required, all flash messages are gone. Any ideas?

    EDIT: I've done some further testing and can add that this problem is ONLY occurring in Chrome on the Mac. Other Mac browsers, and Chrome on Windows, display the flash messages as expected. This may be a bug with Chrome on the Mac, then...?

    Read the article

  • When to use CTEs to encapsulate sub-results, and when to let the RDBMS worry about massive joins.

    - by IanC
    This is a SQL theory question. I could provide an example, but I don't think it's needed to make my point; anyone experienced with SQL will immediately know what I'm talking about. Usually we use joins to minimize the number of records by matching the left and right rows. However, under certain conditions, joining tables causes a multiplication of results, where the result is all permutations of the left and right records. I have a database which has 3 or 4 such joins. This turns what would be a few records into a multitude. My concern is that the tables will be large in production, so the number of these joined rows will be immense. Further, heavy math is performed on each row, and the idea of performing math on duplicate rows is enough to make anyone shudder. I have two questions. The first: is this something I should care about, or will SQL Server intelligently realize these rows are all duplicates and optimize all processing accordingly? The second: is there any advantage to grouping each part of the query so as to get only the distinct values going into the next part, using something like:

        WITH t1 AS (
          SELECT DISTINCT ...  -- [or GROUP BY]
        ), t2 AS (
          SELECT DISTINCT ...
        ), t3 AS (
          SELECT DISTINCT ...
        )
        SELECT ...

    I have often seen DISTINCT applied to subqueries, and there is obviously a reason for doing so. However, I'm talking about something a little different and perhaps more subtle and tricky.
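
    A concrete (and entirely hypothetical) instance of the pattern being asked about, with each side collapsed to its distinct keys before the join so the permutation blow-up never happens:

        WITH distinct_orders AS (
            SELECT DISTINCT CustomerID, ProductID
            FROM Orders                     -- hypothetical table
        ), distinct_shipments AS (
            SELECT DISTINCT CustomerID, ProductID
            FROM Shipments                  -- hypothetical table
        )
        SELECT o.CustomerID, o.ProductID
        FROM distinct_orders o
        JOIN distinct_shipments s
          ON s.CustomerID = o.CustomerID
         AND s.ProductID  = o.ProductID;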

    Read the article

  • Where to start with the development of first database driven Web App (long question)?

    - by Ryan
    Hi all, I've decided to develop a database-driven web app, but I'm not sure where to start. The end goal of the project is threefold: 1) to learn new technologies and practices; 2) to deliver an unsolicited demo to management showing how information that the company stores as office documents, spread across a cumbersome network folder structure, can be consolidated and made easier to access and maintain; and 3) to show my co-workers how Test-Driven Development and prototyping via class diagrams can be very useful and reduce future maintenance headaches. I think this ends up being a basic CMS, for which I have generated a set of features:
    1) Create a database to store the site structure (organized as a tree with a 'project group'/project structure).
    2) Pull the site structure from the database and display it as a tree using basic front-end technologies.
    3) Add administrator privileges/tools for modifying the site structure.
    4) Auto-create the required sub-pages when an admin adds a new project. (There will be several sub-pages under each project, and the content for each sub-page is different.)
    5) Add user privileges for assigning read and write access to sub-pages.
    What I would like to do is use Test-Driven Development and class diagramming as part of my process for developing this project. My problem: I'm not sure where to start. I have read about unit testing and UML, but never used them in practice. Also, having never worked with databases before, how do I incorporate them into the models and unit tests? Thank you all in advance for your expertise.

    Read the article

  • PHP include taking too long

    - by wxiiir
    I have a PHP file of around 100 MB which is full of arrays (only arrays). I've made a script that includes this file (for processing); first it exhausted the default XAMPP 128 MB memory limit, and after I raised that to 1024 MB it just takes forever and doesn't do anything. I'm sure the problem comes from the sheer size of the file, because I've tried removing all lines of code and leaving just the include plus an echo to tell me when it finishes executing, and it does the same thing (takes forever). I've also tried to run the 100 MB file on its own - same thing again. A 10 MB file takes forever as well, but a similar 1 MB file is read and executed almost instantly, so the problem must be more than just file size. I was avoiding using C++ for a simple project like this and would rather not, as PHP is easier for me and the task doesn't need the added speed C++ would bring, but if I have no luck solving this problem I guess I'll have to.
    EDIT: Reasons for not using a database:
    1) Whoever made the file didn't use a database, and it will be pretty hard to store this in an organized database if I'm not able to do something with it first - like just reading it, copying parts of it, or getting it into memory.
    2) I don't have experience working with databases, as pretty much all the PHP I've ever done didn't need large amounts of stored data - 50 KB at best. If I were planning a big project or huge chunks of data like this one, I definitely would use one, but I didn't make this mess to start with and now I have to undo it.
    3) The logic of having to store a small portion of data like 10 MB on the hard drive, when every computer now has enough RAM to fit a whole OS in it, is pretty much incomprehensible to me unless someone gives a good explanation. If I had to access a lot of such files simultaneously I would understand, but like I said, this is a simple project: this is the only file accessed at a given time, and it isn't even for some kind of website - it's to run a few times and be done with.
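
    One hedged workaround sketch (the file and variable names are assumptions, not the asker's code): pay the PHP parse cost once, then load a serialized cache on later runs - unserialize() skips re-tokenizing 100 MB of PHP source each time:

        <?php
        // bigdata.php is assumed to define $data (the file full of arrays)
        $cache = __DIR__ . '/bigdata.cache';

        if (is_file($cache)) {
            // later runs: load the pre-parsed structure directly
            $data = unserialize(file_get_contents($cache));
        } else {
            // first run: parse the PHP source once, then cache the result
            include __DIR__ . '/bigdata.php';
            file_put_contents($cache, serialize($data));
        }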

    Read the article

  • __declspec(dllimport) causes compiler crash on MSVC 2010

    - by Zero
    In a *.cpp file, trying to use a third-party lib:

        #define DLL_IMPORT
        #include <thirdParty.h>
        // The third-party header has code like:
        //   #ifdef DLL_IMPORT
        //   #define DLL_DECL __declspec(dllimport)
        //
        // fatal error C1001: An internal error has occurred in the compiler.

    Alternative:

        #define NO_DLL
        #include <thirdParty.h>
        // The third-party header has code like:
        //   #elif defined(NO_DLL)
        //   #define DLL_DECL
        //
        // Compiles fine, but linker errors as it can't find the DLL functions.

    I can reproduce the results by removing the macros and #defines altogether and manually editing the third-party files to have __declspec(dllimport) or not. Has anyone come across anything similar, or can anyone hint at the cause? (The project is created using CMake.) The above is an actual example of a 2-line *.cpp that crashes, so it's narrowed down to something in the #include. The following also work fine:
    - Compiling the examples provided by the third party (they provide a *.sln) that use dllimport/export, so it doesn't appear to be the fault of the library.
    - Compiling the third-party lib as part of the production project (so dllexport works fine).
    I've trawled the project settings pages of the two projects to try to spot differences, but have come up blank. Of course, it's possible I'm missing something, as those settings pages are not the easiest to navigate. I'll get access to VS2008 in a day or so, so I can compare with that. The third-party library is MySQL++.

    Read the article

  • How security of the systems might be improved using database procedures?

    - by Centurion
    The usage of Oracle PL/SQL procedures for controlling access to data is often emphasized in PL/SQL books and other sources as the more secure approach. I've seen several systems where all business logic related to the data is performed through packages, procedures and functions, so the application code becomes quite "dumb" and is only responsible for the visualization part. I've even heard some devs call such approaches, and the architects driving them, "database nazi" :) because all the logic code resides in the database. I know about the performance benefits of DB procedures, but right now I'm interested in the "better security" claim for the thick-client model. I assume such a design is mostly used with Oracle (and maybe MS SQL Server) databases. I agree the approach improves security, but only if there are not many users and every system user has a database account, so that we can control and monitor data access through standard database user security. However, how does such an approach increase security for an average web system where thick clients are used - for example, one database user with DML grants on all tables, with the other users handled via "users" and "user_rights" tables? We could use DB procedures, save usernames into a context, and use that for filtering, but the vulnerability remains at the root: if the main database account is compromised, then nothing will help. Of course, in a real system we might consider at least several main users (for example frontend_db_user, backend_db_user).
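
    For concreteness, a hedged sketch of the context-based filtering mentioned above (all names are hypothetical; DBMS_SESSION.SET_CONTEXT and SYS_CONTEXT are the standard Oracle pieces):

        CREATE OR REPLACE CONTEXT app_ctx USING app_security_pkg;

        CREATE OR REPLACE PACKAGE app_security_pkg AS
          PROCEDURE set_app_user(p_username IN VARCHAR2);
        END app_security_pkg;
        /
        CREATE OR REPLACE PACKAGE BODY app_security_pkg AS
          PROCEDURE set_app_user(p_username IN VARCHAR2) IS
          BEGIN
            -- only this package is allowed to write the context value
            DBMS_SESSION.SET_CONTEXT('app_ctx', 'app_user', p_username);
          END set_app_user;
        END app_security_pkg;
        /
        -- expose data only through views filtered on the logical user
        CREATE OR REPLACE VIEW my_orders AS
          SELECT *
          FROM orders
          WHERE owner = SYS_CONTEXT('app_ctx', 'app_user');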

    Read the article

  • Database solution for 200million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8-hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would collect data for things like billing statements, e.g. "How many transactions of type A did each user run this month?" (it could be more complex, but that's the general idea). I can spread the database amongst several machines as necessary, but I don't think I can take old data offline: I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use and wouldn't need to be generated in real time for an end user (they could run overnight if needed). Does anyone have any suggestions as to which databases would be a good fit? P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?
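
    (For scale: 200 million rows per 8-hour day works out to 200,000,000 / 28,800 s ≈ 7,000 writes per second sustained, before any burstiness is accounted for.)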

    Read the article

  • Choose 'better' or more familiar technologies for a new project?

    - by John
    I am looking to start work on a brand-new project, something I've been thinking about for a while as my first independent sellable project. It's, broadly speaking, a web-based service application, and my first choice of server language is quite easy: I know Java pretty well from working on Java web apps in the past. However, my experience doing web apps involved JSP, Servlets and JSTL. I know the ideas behind newer technologies like Hibernate/Spring but have never used them - so we wrote our own DAOs, handled AJAX by writing special mini-JSP pages that generated XML/JSON, etc. I'm not hugely into the idea that Spring/Hibernate are the 'only' or 'right' way to do any Java web project, but they are widely used. On the other hand, trying to learn them would not only increase initial development time, but I'd also be using my learning attempts to build a production system. I remember one of Joel's early articles said (I'll paraphrase since I can't find it) "regardless of what's cool, always use the technologies that the lead developer (or dev team?) knows best". I wondered what people thought about that? ps: should this be CW?

    Read the article

  • Unable to retrieve data, mysql php pdo

    - by Kyle Hudson
    Hi, I have an issue: I cannot get any results from MySQL on a production box, but I can on a development box. We use PHP 5.3 with MySQL (PDO).

        $sd = $this->dbh->quote($sd);
        $si_sql = "SELECT COUNT(*) FROM tbl_wl_data
                   WHERE (site_domain = $sd OR siteDomainMasked = $sd);";
        if($this->dbh->query($si_sql)->rowCount() > 0) {
            // gets to here, just doesn't get through the loop
            $sql = "SELECT pk_aid, site_name, site_css, site_img_sw, supportPhone
                    FROM tbl_wl_data
                    WHERE (site_domain = $sd OR siteDomainMasked = $sd);";
            foreach($this->dbh->query($sql) as $wlsd) { //-- fails here
                if($wlsd['wl_status'] != '1') {
                    require "_domainDisabled.php";
                    exit;
                }
                $this->pk_aid = $wlsd['pk_aid'];
                $this->siteTitle = $wlsd['site_name'];
                $this->siteCSS = $wlsd['site_css'];
                $this->siteImage = $wlsd['site_img_sw'];
                $this->siteSupportPhone = $wlsd['supportPhone'];
            }
        } else {
            throw new ERR_SITE_NOT_LINKED;
        }

    It just doesn't seem to get into the loop. I ran the query in Navicat and it returns the data. Really confused :S
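
    A hedged debugging sketch, separate from the code above: put PDO into exception mode so a query that fails quietly on the production box (returning false) announces itself instead. (It may also be worth checking that every column the loop reads, such as wl_status, actually appears in the SELECT list.)

        <?php
        // assumed connection parameters; only the attribute line matters
        $dbh = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // In the default silent mode, query() returns false on failure, and
        // foreach over false skips the loop with only a warning - which
        // production boxes often hide. Exception mode surfaces the error.
        $stmt = $dbh->query('SELECT pk_aid, site_name FROM tbl_wl_data');
        foreach ($stmt as $row) {
            var_dump($row);
        }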

    Read the article

  • Need help setting up a truststore's chain of authority (in Tomcat)

    - by codeinfo
    Lead-in: I'm not an expert, by far, in application security via SSL, but I am trying to establish a test environment that includes all possible scenarios we may encounter in production. For this I have a tree of Certificate Authorities (CAs) that are the issuers of an assortment of test client certificates and node/server certificates (a complex test environment representing the various published web services and other applications we integrate with). The structure of these CAs is as follows: a Root CA, which has signed/issued Sub CA1, Sub CA2, and Sub CA3. These subs have then signed/issued all certificates of the various nodes and clients in the environment. Now for the question... In my application's truststore I would like to trust everything signed by Sub CA1 and Sub CA2, but not Sub CA3 (untrusted). Does this mean my truststore should (1) include ONLY Sub CA1 and Sub CA2, or (2) include the Root CA, Sub CA1, and Sub CA2? I don't know the proper way to represent this trust chain in a truststore. In the future I would also like to add a Sub CA4 (also signed/issued by the Root CA), but add that to a Certificate Revocation List (CRL) for testing purposes. Ahead of time, thank you for any help concerning this. It's greatly appreciated.
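
    If it helps to see the mechanics, a hedged sketch of building such a truststore with the JDK's keytool (file names and the changeit password are placeholders). A Java trust anchor may itself be an intermediate CA, so importing only the two subs - and neither the Root CA nor Sub CA3 - leaves chains ending at Sub CA3 untrusted:

        keytool -importcert -alias subca1 -file subca1.pem \
                -keystore truststore.jks -storepass changeit -noprompt
        keytool -importcert -alias subca2 -file subca2.pem \
                -keystore truststore.jks -storepass changeit -noprompt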

    Read the article

  • Cookies NULL On Some ASP.NET Pages (even though it IS there!)

    - by DaveDev
    Hi folks, I'm working on an ASP.NET application and I'm having difficulty understanding why a cookie appears to be null. On one page (results.aspx) I create a cookie, adding entries every time the user clicks a checkbox. When the user clicks a button, they're taken to another page (graph.aspx) where the contents of that cookie are read. The problem is that the cookie doesn't seem to exist on graph.aspx. The following code returns null:

        Request.Cookies["MyCookie"];

    The weird thing is this is only an issue on our staging server. This app is deployed to a production server and it's fine. It also works perfectly locally. I've put debug code on both pages:

        StringBuilder sb = new StringBuilder();
        foreach (string cookie in Request.Cookies.AllKeys)
        {
            sb.Append(cookie.ToString() + "<br />");
        }
        this.divDebugOutput.InnerHtml = sb.ToString();

    On results.aspx (where there are no problems), I can see the cookies are: MyCookie, __utma, __utmb, __utmz, _csoot, _csuid, ASP.NET_SessionId, __utmc. On graph.aspx, there is no 'MyCookie': __utma, __utmb, __utmz, _csoot, _csuid, ASP.NET_SessionId, __utmc. With that said, if I take a look with FireCookie, I can see that the same cookie does in fact exist on BOTH pages! WTF?!?!?!?! (ok, rant over :-) ) Has anyone seen something like this before? Why would ASP.NET claim that a cookie is null on one page, and not null on another?
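
    A hedged elimination step, not a diagnosis: writing the cookie with an explicit Path (and, if the staging server runs under a different host or subdomain, an explicit Domain) rules out scoping differences between the environments:

        HttpCookie cookie = new HttpCookie("MyCookie", "value");
        cookie.Path = "/";                    // visible to every page in the app
        // cookie.Domain = ".example.com";    // assumption: only if staging
        //                                    // serves the app on a subdomain
        cookie.Expires = DateTime.Now.AddHours(1);
        Response.Cookies.Add(cookie);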

    Read the article

  • Managing My Database in Source Control

    - by Jason
    As I am working on a new database project (within VS2008), and as I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion). I found some information on SO, including this post: "Keeping development databases in multiple environments in sync". One of the answers in particular pointed to a number of links, all of which had good, useful information. I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into the repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately this means that, when a developer is testing his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up to date, but it is also run locally on every developer's machine. This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, another question: does this also mean that I should NOT check in the mdf or ldf files that are created by Visual Studio? Thanks for any help and additional insight. Always appreciated.
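
    As an illustration of the kind of artifact that does get checked in (a hypothetical example, following the numbered-change-script convention from those posts): a small, ordered schema script that the build runs and that is never edited once committed:

        -- 001_create_customers.sql (checked into Subversion; run in order)
        CREATE TABLE Customers (
            CustomerID INT IDENTITY(1,1) PRIMARY KEY,
            Name       NVARCHAR(100) NOT NULL,
            CreatedAt  DATETIME      NOT NULL DEFAULT GETDATE()
        );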

    Read the article

  • Google Sites API - File Cabinets: Spaces and extension separator (.) are removed from file names

    - by user1299447
    We have a series of internal reports that we update regularly from our internal databases. We built an application in C# that uploads these reports to a Google Site. Everything works fine, except that the name of the file shown to the final user in the File Cabinet does not include the original spaces or the extension separator (.). For example, Stock per warehouse.pdf is shown as: Stockperwarehousepdf. Below is a simplified version of the code.

        private AtomEntry UploadAttachment(string filename, AtomEntry parent, string title, string description)
        {
            SiteEntry entry = new SiteEntry();
            AtomCategory category = new AtomCategory(SitesService.ATTACHMENT_TERM,
                                                     SitesService.KIND_SCHEME);
            category.Label = "attachment";
            entry.Categories.Add(category);

            AtomLink parentLink = new AtomLink(AtomLink.ATOM_TYPE, SitesService.PARENT_REL);
            parentLink.HRef = parent.SelfUri;
            entry.Links.Add(parentLink);

            entry.MediaSource = new MediaFileSource(filename,
                MediaFileSource.GetContentTypeForFileName(filename));
            entry.Content.Type = MediaFileSource.GetContentTypeForFileName(filename);
            entry.Title.Text = title;
            entry.Summary.Text = description;

            AtomEntry newEntry = null;
            newEntry = service.Insert(new Uri(makeFeedUri("content")), entry);
            return newEntry;
        }

    The key line is where the MediaFileSource object is created. Any idea what we are missing? We've tried all sorts of changes :(

    Read the article

  • How do I execute a SQL statement through a variable (dynamic SQL) that tries to do an insert into a table variable?

    - by Testifier
    If I do what I want to do with a TEMPORARY table, it works fine:

        DECLARE @CTRFR VARCHAR(MAX)
        SET @CTRFR = 'select blah blah blah'
        -- a very long select statement; this returns a 0 or some greater
        -- number. Please note! --> I NEED THIS NUMBER.

        IF EXISTS ( SELECT *
                    FROM sys.objects
                    WHERE object_id = OBJECT_ID(N'[dbo].[#CTRFRResult]')
                      AND type IN ( N'U' ) )
            DROP TABLE [dbo].[#CTRFRResult]

        CREATE TABLE #CTRFRResult ( CTRFRResult VARCHAR(MAX) )

        SET @CTRFR = 'insert into #CTRFRResult ' + @CTRFR
        EXEC(@CTRFR)

    The above works fine. The problem is that several databases are using the same TEMP table, so I need to use a table VARIABLE instead of a temporary table. What I have below is not working, because it says that the table must be declared:

        DECLARE @CTRFRResult TABLE ( CTRFRResult VARCHAR(MAX) )

        SET @CTRFR = 'insert into @CTRFRResult ' + @CTRFR -- I think the issue is here.
        EXEC(@CTRFR)

    Setting @CTRFR to 'insert into...' is not working because, I'm assuming, the table name is out of scope. How would I go about mimicking the temporary-table code using a table variable?
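
    A hedged sketch of one standard way around the scope problem, keeping the dynamic string as a bare SELECT: INSERT ... EXEC lets the outer batch, where the table variable IS in scope, capture the result set of the dynamic batch:

        DECLARE @CTRFR NVARCHAR(MAX)
        SET @CTRFR = N'select blah blah blah'   -- the SELECT only, no INSERT prefix

        DECLARE @CTRFRResult TABLE ( CTRFRResult VARCHAR(MAX) )

        INSERT INTO @CTRFRResult ( CTRFRResult )
        EXEC ( @CTRFR )   -- rows flow from the dynamic scope into the variable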

    Read the article

  • Website hosting from home - IIS6 [closed]

    - by Paul
    I want to host a few websites from home, primarily because I'm using some beta Microsoft software (.NET 4 and EF) and don't want to install it on my production server, which is hosted at eukhost.com. Basically, I'm completely new to this sort of thing. So far, here is what I've done:
    - Registered the domain name at namecheap.com (let's call it mydomain.com)
    - Gone to "Nameserver Registration" in the panel and entered my IP address for the NS1 and NS2 records (let's say the IP is 0.0.0.0)
    - Gone to "Domain Name Server Setup" and entered ns1.mydomain.com & ns2.mydomain.com
    - Forwarded requests from port 80 to my internal IP (let's say 192.168.1.254)
    - Created the website in IIS (I'm just testing with a single website so far, so have not created any host header values)
    Now, if I type in the IP address (http://0.0.0.0) I get the site as expected. However, if I enter http://www.mydomain.com I get an error saying "DNS Error - Cannot find server". I'm aware that there is a service from DynDNS that will automatically change the IP if I have a dynamic address, but my IP has remained static since the ISP installed it (in October), so I don't need that. Is there any way I can get the DNS to work just by configuring IIS or something in Windows? I don't really want to have to pay for any third-party service. Thanks,

    Read the article

  • Migrate from Oracle to MySQL

    - by Cassy
    Hi together. We ran into serious performance problems with our Oracle database, and we would like to try to migrate to a MySQL-based database (either MySQL directly or, preferably, Infobright). The thing is, we need to let the old and the new system overlap for at least some weeks, if not months, before we actually know whether all features of the new database match our needs. So here is our situation: The Oracle database consists of multiple tables with millions of rows each. During the day there are literally thousands of statements, which we cannot stop for migration. Every morning, new data is imported into the Oracle database, replacing some thousands of rows. Copying this process is not a problem, so we could, in theory, import into both databases in parallel. But - and here lies the challenge - for this to work, we need an export from the Oracle database with a consistent state from one day. (We cannot export some tables on Monday and others on Tuesday, etc.) This means that at least the export should finish in less than one day. Our first thought was to dump the schema, but I wasn't able to find a tool to import an Oracle dump file into MySQL. Exporting tables to CSV files might work, but I'm afraid it could take too long. So my question now is: what should I do? Is there any tool to import Oracle dump files into MySQL? Does anybody have experience with such a large-scale migration? Thanks in advance, Cassy. PS: Please don't suggest performance-optimization techniques for Oracle; we already tried a lot :-)
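
    On the consistent-snapshot half of the problem, a hedged sketch using Oracle's classic export utility (user, schema and file names are placeholders): CONSISTENT=y makes every exported table reflect the same point in time, even while the day's statements keep running:

        exp system/password OWNER=app FILE=app.dmp LOG=app.log CONSISTENT=y

    The resulting .dmp is Oracle-proprietary, so MySQL cannot import it; its role here would only be a consistent staging copy, with CSV exports or a migration tool still needed as the Oracle-to-MySQL bridge.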

    Read the article

  • Game design flaw, need help investigating

    - by Snake
    I'm not sure I'll be able to get help here, but I'll give it a shot. The problem is that I don't know where the problem is. I have a card game in which, when the human plays by dragging a card, a handler is called at the end of the drag (using postExecute) with a delay of 0.5 s to start the next player in turn (which is a bot). The bot chooses a colour and plays it, and at the end of the animation (the card moving to the middle) a handler is started for the next bot, and so on. Once play reaches the human player again, it waits for his touches to drag a card and starts the cycle over. The problem is that in production I sometimes get errors. The errors somehow mess up the sequence, which ends with some players having more cards than others. After investigation, I found that the transition from human to bot is the problem. Somehow the transition happens twice (meaning the handler calls postExecute twice, the bot plays twice, and everything gets messed up). It's been multiple months and I can't reproduce it (to fix it), and I can't figure out why it is happening. ANY IDEA how I can go after it? How can I get more info about it, or how can I solve something like this? Any pointer would help me.
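
    One hedged way to both trap and neutralize a double-fire, sketched with an Android-style Handler (the post names postExecute without showing code, so everything here is an assumption about that structure): stamp each scheduled transition with the turn it belongs to, refuse stale or duplicate ones, and log when that happens:

        // assumed imports: android.os.Handler, android.util.Log
        private final Handler handler = new Handler();
        private int turnCounter = 0;

        private void scheduleNextPlayer() {
            final int turn = turnCounter;           // stamp this transition
            handler.postDelayed(new Runnable() {
                @Override public void run() {
                    if (turn != turnCounter) {      // a duplicate or stale post
                        Log.w("Game", "duplicate transition for turn " + turn);
                        return;                     // neutralize it
                    }
                    turnCounter++;                  // consume the turn exactly once
                    startNextPlayer();              // assumed game method
                }
            }, 500);
        }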

    Read the article

  • asset_packing tiny_mce files

    - by haries
    I use the inplacericheditor plugin and tiny_mce. Before using asset_packager, this is how I included the files, and they worked well:

        <script src="/javascripts/patch_inplaceeditor_1-8-2.js" type="text/javascript"></script>
        <script src="/javascripts/patch_inplaceeditor_editonblank_1-8-2.js" type="text/javascript"></script>
        <script src="/javascripts/tiny_mce/tiny_mce.js" type="text/javascript"></script>
        <script src="/javascripts/tiny_mce_init.js" type="text/javascript"></script>
        <script src="/javascripts/inplacericheditor.js" type="text/javascript"></script>

    The section of my asset_packager.yml for the above files looks like this:

        tinyeditor:
          - patch_inplaceeditor_1-8-2
          - patch_inplaceeditor_editonblank_1-8-2
          - tiny_mce/tiny_mce
          - tiny_mce_init
          - tiny_mce/langs/en
          - tiny_mce/themes/advanced/editor_template
          - tiny_mce/themes/advanced/langs/en
          - tiny_mce/plugins/save/editor_plugin
          - tiny_mce/plugins/autoresize/editor_plugin
          - tiny_mce/plugins/paste/editor_plugin
          - tiny_mce/plugins/preview/editor_plugin
          - tiny_mce/plugins/table/editor_plugin
          - tiny_mce/plugins/contextmenu/editor_plugin
          - tiny_mce/plugins/emotions/editor_plugin
          - inplacericheditor

    When I include the asset-packaged file and load the page (in production), I get the following errors: "Ajax.InPlaceEditor is undefined" and "Ajax.InPlaceRichEditor is not a constructor". Can anyone shed some light on where I am going wrong, or share a better way to asset-package TinyMCE? Thanks!

    Read the article

  • How can I get sessions to work if I'm using Google App Engine + Django 1.1?

    - by user341642
    Is there a way for me to get sessions working? I know Django has built-in session management, and GAE has some tools for it if you're using their watered-down version of Django 0.96, but is there a way to get sessions to work if you're trying to use GAE with Django 1.1 (i.e. the use_library() call)? I assume a db-backed session doesn't work, and a filesystem-backed one won't work, because we don't have access to the filesystem if we deploy to the Google production servers. This kinda worked (as in, didn't crap out) when I used SessionMiddleware backed by a local-memory, non-persistent cache (i.e. setting SESSION_ENGINE to django.contrib.sessions.backends.cache). But the session never persists in this case, no matter how I set the timeouts: a new session key is generated on every page reload. Maybe this is because GAE assumes complete statelessness with each request and blows away my local cache? Apologies in advance, I'm pretty new to Python. Any suggestions would be greatly appreciated.
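
    A hedged sketch of the usual direction (the settings names are standard Django 1.1; whether the bare memcached:// backend URI reaches App Engine's memcache depends on the GAE/Django bridge in use, so treat that line as an assumption): local-memory caches live and die with each App Engine instance, so the session store has to be something shared, like memcache or the datastore.

        # settings.py (Django 1.1-era settings; CACHE_BACKEND predates 1.3)
        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        # Assumption: a GAE-aware cache backend that proxies to App Engine's
        # shared memcache service rather than per-instance local memory.
        CACHE_BACKEND = 'memcached://'

        SESSION_COOKIE_AGE = 60 * 60   # seconds; survives across instances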

    Read the article

  • Unnecessary Redundancy with Tables.

    - by Stacey
    My items are listed below (this is just a summary, of course). I'm using the method shown for the "Detail" table to represent a type of 'inheritance', so to speak - since "Item" and "Downloadable" are going to be identical except that each will have a few additional fields relevant only to itself. My question is about this design pattern. This sort of thing appears many, many times in our projects - is there a more intelligent way to handle it? I basically need to normalize the tables as much as possible. I'm extremely new to databases, so this is all very confusing to me. There are 5 items: Awards, Items, Purchases, Tokens, and Downloads. They are all very, very similar, except each has a few pieces of data relevant only to itself. I've tried to use a declaration field (like an enumerator 'Type' field) in conjunction with nullable columns, but I was told that is a bad approach. What I have done is take everything similar and place it in a single table, and then each type has its own table that references a column in the 'base' table. The problem comes with relationships, or junctions - linking all of these back to a customer. Each type takes around 2 additional tables to properly junction all of the data together, and as such my database is growing very, very large. Is there a smarter practice for this kind of behavior?

        Item
          ID   | GUID
          Name | varchar(64)

        Product
          ID      | GUID
          Name    | varchar(64)
          Store   | GUID [FK]
          Details | GUID [FK]

        Downloadable
          ID      | GUID
          Name    | varchar(64)
          Url     | nvarchar(2048)
          Details | GUID [FK]

        Details
          ID          | GUID
          Price       | decimal
          Description | text

        Peripherals [JUNCTION]
          ID     | GUID
          Detail | GUID [FK]

        Store
          ID        | GUID
          Addresses | GUID

        Addresses
          ID      | GUID
          Name    | nvarchar(64)
          State   | int [FK]
          ZipCode | int
          Address | nvarchar(64)

        State
          ID   | int
          Name | varchar(32)
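
    For comparison, a hedged sketch of the "class table inheritance" shape this is circling (table and column names are hypothetical): the shared columns and the single link back to the customer live once in a base table, so each subtype adds only its own extras and no extra junction tables:

        CREATE TABLE Acquisition (          -- base table for all 5 item types
            ID          UNIQUEIDENTIFIER PRIMARY KEY,
            CustomerID  UNIQUEIDENTIFIER NOT NULL,  -- one link to the customer
            Name        VARCHAR(64),
            Price       DECIMAL(18,2),
            Description TEXT
        );

        CREATE TABLE Downloadable (         -- subtype: only its extra fields
            ID  UNIQUEIDENTIFIER PRIMARY KEY
                REFERENCES Acquisition(ID), -- shares the base row's key
            Url NVARCHAR(2048)
        );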

    Read the article

  • How to improve my software project's speed?

    - by Blitzkr1eg
    I'm doing a school software project with my classmates in Java. We store the info in a remote db. When we start the application, we pull all the information from the database and transform it into objects to use in our application (using Java SQL statements). In the application we edit some of these objects, and then when we exit the application we save or update the information in the database using Hibernate. As you see, we don't use Hibernate for pulling in information; we use it just for saving and updating. We have two very similar problems: the loading of objects (when we start the app) and the saving of objects with Hibernate (when closing the app) both take too much time. And our project is not a huge enterprise application; it's a quite small app - we just manage some students, teachers, homework and tests, so our db is also very, very small. How could we improve performance? Later edit: with a local database it runs very quickly; it's only slow against remote databases.
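
    Given that the local/remote difference points at network round trips, a hedged sketch of one standard lever (the property names are real Hibernate settings; the values are guesses): let Hibernate batch the exit-time writes so many rows travel per round trip:

        import java.util.Properties;
        import org.hibernate.cfg.Configuration;

        public class BatchedSessionFactory {
            public static Configuration configure() {
                Properties props = new Properties();
                // group INSERT/UPDATE statements into JDBC batches
                props.setProperty("hibernate.jdbc.batch_size", "50");
                props.setProperty("hibernate.order_inserts", "true");
                props.setProperty("hibernate.order_updates", "true");
                return new Configuration().addProperties(props);
            }
        }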

    Read the article
