Search Results

Search found 10417 results on 417 pages for 'large'.


  • Move SELECT to SQL Server side

    - by noober
    Hello all, I have an SQLCLR trigger. It contains a large and messy SELECT inside, with parts like: (CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID) THEN '1' ELSE '0' END) AS IsUpdated -- Is selected row just added? as well as JOINs etc. I'd like to have the result as a single table with everything included. Question 1. Can I move this SELECT to the SQL Server side? If yes, how do I do this? By "move", I mean creating a stored procedure or something else that can be executed before reading the dataset in the while cycle. The 2 following questions make sense only if the answer is "yes". Why do I want to move the SELECT? First, I don't like mixing SQL with C# code. Second, I suppose that server-side queries run faster, since the server has more chances to cache them. Question 2. Am I right? Is this some sort of optimization? Also, the SELECT contains constant strings, but they are localizable. For instance, in WHERE R.Status = "Enabled", "Enabled" should be changed for French, German etc. So, I want to write 2 static methods -- OnCreate and OnDestroy -- then mark them as stored procedures. When registering/unregistering my assembly on the server side, just call them respectively. In OnCreate, format the SELECT string, replacing {0}, {1}... with the required values from the assembly resources. Then I can localize the resources only, not every script. Question 3. Is this a good idea? Is there an existing attribute to mark methods to be executed by SQL Server automatically after (un)registration of an assembly? Regards,

    Read the article

  • File upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per file upload. To implement this I plan the following: 1. In php.ini, set post_max_size = 10M. 2. In the PHP script which receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10M. While testing this I tried uploading a file of 1 GB and analysed the HTTP traffic, and this is what I found. - The entire 1 GB of data is first uploaded to the server temporarily and discarded once the HTTP request completes. I couldn't find exactly where the file was being saved (it was not in the temporary directory on the server), but my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server. - The PHP script execution started only after completion of the HTTP request (i.e. after uploading the entire 1 GB). Now I have 2 concerns: a) People may exploit my server bandwidth by trying to upload large files, which I will have to discard anyway. b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB of data is first uploaded to the server temporarily, which means that for that period it will consume that much memory on my server. What's the common solution for this? Am I missing something here?

    Read the article

  • Booking logic and architecture, database sync: Hotels, tennis courts reservation system ...

    - by coulix
    Hello Stackers, Imagine that you want to design a tennis booking system. You have 5 tennis clubs as partners, with no online API allowing you to check on their side whether a court is booked or not: you have to build this part as well. Every time a booking is made on their side you want it to be known by our system, probably using a POST request from the tennis partner to our server. Every time a booking is made on our website, we want to push the booking to their system. The difficulty is that their system needs to be online and accessible from outside. Their IP may change, so we have to use a DNS updater. In case their system is not available we still accept the booking and fall back to an async email with a 'confirm booking / reject booking' link sent to the club. I find the whole process quite complex and was wondering how online hotel booking systems and hotels handle this. Do they all have their data open and online? The good thing is that the data will grow large and fits nicely with some NoSQL ;) like CouchDB
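    Purely as an illustrative sketch of the push-with-fallback flow described above (the endpoint path, helper names and confirm/reject URLs are assumptions for illustration, not part of the question), in Python:

    ```python
    import requests  # assumed HTTP client; any client would do

    def send_email(to, subject, body):
        # placeholder for real email delivery (SMTP, a mail API, ...)
        print(f"email to {to}: {subject}\n{body}")

    def push_booking(partner_url, booking, club_email):
        """Try to push a booking to the partner club; fall back to a confirm/reject email."""
        try:
            resp = requests.post(f"{partner_url}/bookings", json=booking, timeout=5)
            resp.raise_for_status()
            return "synced"
        except requests.RequestException:
            # Partner system offline or unreachable: keep the booking on our side
            # and ask the club to confirm or reject it asynchronously by email.
            bid = booking["id"]
            send_email(
                to=club_email,
                subject=f"Please confirm booking #{bid}",
                body=(f"Confirm: https://ourbookingsite.example/bookings/{bid}/confirm\n"
                      f"Reject:  https://ourbookingsite.example/bookings/{bid}/reject"),
            )
            return "pending_confirmation"
    ```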

    Read the article

  • Data structure choices for high-speed and memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If this function has never seen this string before, it needs to perform some processing. If the function has seen the string before, it needs to skip processing. After a specified amount of time, the function should accept duplicate strings. This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application, just trying to get down to the core concept for the purpose of the question. The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough. The naive implementation would be simply (in C#): Dictionary<String,DateTime> though in the interest of lowering the memory footprint and potentially increasing performance I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use? EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
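    As a point of reference only, here is a minimal Python sketch of the naive hash-plus-timestamp approach described above (the class name, TTL value and use of SHA-256 are assumptions, not from the question); it stores a digest rather than the string itself and synchronizes access for multiple worker threads:

    ```python
    import hashlib
    import threading
    import time

    class DuplicateFilter:
        """Remembers string digests for ttl_seconds; older entries count as new again."""

        def __init__(self, ttl_seconds=300.0):
            self.ttl = ttl_seconds
            self.seen = {}               # digest -> last-seen timestamp
            self.lock = threading.Lock()

        def is_duplicate(self, s):
            digest = hashlib.sha256(s.encode("utf-8")).digest()
            now = time.monotonic()
            with self.lock:
                last = self.seen.get(digest)
                self.seen[digest] = now  # refresh the last-seen time
            # a production version would also purge expired entries periodically
            # to keep the map bounded
            return last is not None and (now - last) < self.ttl

    f = DuplicateFilter(ttl_seconds=60)
    print(f.is_duplicate("hello"))   # False: first time seen
    print(f.is_duplicate("hello"))   # True: seen again within the TTL
    ```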

    Read the article

  • Reduce file size for charts pasted from Excel into Word

    - by Steve Clanton
    I have been creating reports by copying some charts and data from an Excel document into a Word document. I am pasting into a content control, so I use ChartObject.CopyPicture in Excel and ContentControl.Range.Paste in Word. This is done in a loop: Set ws = ThisWorkbook.Worksheets("Charts") With ws For Each cc In wordDocument.ContentControls If cc.Range.InlineShapes.Count > 0 Then scaleHeight = cc.Range.InlineShapes(1).scaleHeight scaleWidth = cc.Range.InlineShapes(1).scaleWidth cc.Range.InlineShapes(1).Delete .ChartObjects(cc.Tag).CopyPicture Appearance:=xlScreen, Format:=xlPicture cc.Range.Paste cc.Range.InlineShapes(1).scaleHeight = scaleHeight cc.Range.InlineShapes(1).scaleWidth = scaleWidth ElseIf ... Next cc End With Creating these reports using Office 2007 yielded files that were around 6 MB, but creating them (using the same worksheet and document) in Office 2010 yields a file that is around 10 times as large. After unzipping the docx, I found that the extra size comes from EMF files that correspond to the charts pasted in using VBA. Where they ranged from 360 to 900 KB before, they are now 5-18 MB. And the graphics are not visibly better. I am able to CopyPicture with the format xlBitmap, and while that is somewhat smaller, it is still larger than the EMF generated by Office 2007 and noticeably poorer quality. Are there any other options for reducing the file size? Ideally, I would like to produce a file with the same resolution for the charts as I did using Office 2007. Is there any way that uses VBA only (without modifying the charts in the spreadsheet)?

    Read the article

  • Modular Inverse and BigInteger division

    - by dano82
    I've been working on the problem of calculating the modular inverse of a large integer, i.e. a^-1 mod n, and have been using BigInteger's built-in function modInverse to check my work. I've coded the algorithm as shown in The Handbook of Applied Cryptography by Menezes, et al. Unfortunately for me, I do not get the correct outcome for all integers. My thinking is that the line q = a.divide(b) is my problem, as the divide function is not well documented (IMO) (my code suffers similarly). Does BigInteger.divide(val) round or truncate? My assumption is truncation, since the docs say that it mimics int's behavior. Any other insights are appreciated. This is the code that I have been working with: private static BigInteger modInverse(BigInteger a, BigInteger b) throws ArithmeticException { //make sure a >= b if (a.compareTo(b) < 0) { BigInteger temp = a; a = b; b = temp; } //trivial case: b = 0 => a^-1 = 1 if (b.equals(BigInteger.ZERO)) { return BigInteger.ONE; } //all other cases BigInteger x2 = BigInteger.ONE; BigInteger x1 = BigInteger.ZERO; BigInteger y2 = BigInteger.ZERO; BigInteger y1 = BigInteger.ONE; BigInteger x, y, q, r; while (b.compareTo(BigInteger.ZERO) == 1) { q = a.divide(b); r = a.subtract(q.multiply(b)); x = x2.subtract(q.multiply(x1)); y = y2.subtract(q.multiply(y1)); a = b; b = r; x2 = x1; x1 = x; y2 = y1; y1 = y; } if (!a.equals(BigInteger.ONE)) throw new ArithmeticException("a and n are not coprime"); return x2; }
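    For reference, a small Python sketch (an illustration, not the poster's Java) of the same extended-Euclidean computation; in this loop both operands stay non-negative, so a truncating divide like BigInteger.divide and Python's floor division produce the same quotient:

    ```python
    def mod_inverse(a, n):
        """Return x such that (a * x) % n == 1, or raise if gcd(a, n) != 1."""
        t, new_t = 0, 1
        r, new_r = n, a % n
        while new_r != 0:
            q = r // new_r          # operands are non-negative, so floor == truncation
            t, new_t = new_t, t - q * new_t
            r, new_r = new_r, r - q * new_r
        if r != 1:
            raise ArithmeticError("a and n are not coprime")
        return t % n

    print(mod_inverse(3, 7))   # 5, since (3 * 5) % 7 == 1
    ```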

    Read the article

  • PHP custom function code optimization

    - by Alex
    Now comes the hard part. How do you optimize this function: function coin_matrix($test, $revs) { $coin = array(); for ($i = 0; $i < count($test); $i++) { foreach ($revs as $j => $rev) { foreach ($revs as $k => $rev) { if ($j != $k && $test[$i][$j] != null && $test[$i][$k] != null) { if(!isset($coin[$test[$i][$j]])) { $coin[$test[$i][$j]] = array(); } if(!isset($coin[$test[$i][$j]][$test[$i][$k]])) { $coin[$test[$i][$j]][$test[$i][$k]] = 0; } $coin[$test[$i][$j]][$test[$i][$k]] += 1 / ($some_var - 1); } } } } return $coin; } I'm not that good at this and if the arrays are large, it runs forever. The function is supposed to find all pairs of values from a two-dim array and sum them like this: $coin[$i][$j] += sum_of_pairs_in_array_row / [count(elements_of_row) - 1] Thanks a lot!
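    As one way to see the optimization (a sketch under assumed data shapes: each row is a dict keyed by the rev ids, and $some_var is passed in as a parameter here, since it is undefined in the snippet), filter each row down to its non-null values once so the quadratic inner loops skip all the null/isset checks:

    ```python
    from collections import defaultdict

    def coin_matrix(test, revs, some_var):
        coin = defaultdict(lambda: defaultdict(float))
        weight = 1.0 / (some_var - 1)
        for row in test:
            # keep only the values that actually exist in this row
            present = [row[k] for k in revs if row.get(k) is not None]
            for j, vj in enumerate(present):
                for k, vk in enumerate(present):
                    if j != k:
                        coin[vj][vk] += weight
        return coin

    rows = [{"r1": "a", "r2": "b", "r3": None}, {"r1": "a", "r2": "c", "r3": "b"}]
    print(coin_matrix(rows, ["r1", "r2", "r3"], some_var=3))
    ```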

    Read the article

  • Insert only adds up to 1000 records and ignores all records after that.

    - by user560559
    I have a large database where the client stores personal messages and fires email notifications [if allowed by the users]. Certain users have the option of sending messages to their entire network of friends. Some users have over 5000 friends in their network, so if they select the whole network they'll be sending messages to over 5000 friends and the system will store all the messages in a table. The problem is that it does not insert more than 1000 records and ignores all inserts after the first 1000. I have increased the packet size and bulk_insert_buffer_size, but still no luck. Since the system stores some of the info in another table for reports, every insert returns its new message id. Due to this I cannot use "insert into table (column1,column2) values (value1,value2), (value1,value2)... etc." The table engine is InnoDB, the MySQL version is 5.1.3, and it is hosted on Amazon Web Services. All I want is to fix this issue of inserting more than 1000 records at a time. As mentioned earlier, it works fine but only up to 1000 records and simply ignores all the records after that. I'm using a PHP foreach(){} to insert a message for each friend and, if an email address is available, send a notification to the user. This foreach(){} also inserts the same record in another table [with only 3 columns] for generating reports.

    Read the article

  • Deleting list items via ProcessBatchData()

    - by q-tuyen
    You can build a batch string to delete all of the items from a SharePoint list like this: //create new StringBuilder StringBuilder batchString = new StringBuilder(); //add the main text to the stringbuilder batchString.Append("<?xml version=\"1.0\" encoding=\"UTF-8\"?><Batch>"); //add each item to the batch string and give it a command Delete foreach (SPListItem item in itemCollection) { //create a new method section batchString.Append("<Method>"); //insert the listid to know where to delete from batchString.Append("<SetList>" + Convert.ToString(item.ParentList.ID) + "</SetList>"); //the item to delete batchString.Append("<SetVar Name=\"ID\">" + Convert.ToString(item.ID) + "</SetVar>"); //set the action you would like to perform batchString.Append("<SetVar Name=\"Cmd\">Delete</SetVar>"); //close the method section batchString.Append("</Method>"); } //close the batch section batchString.Append("</Batch>"); //perform the batch SPContext.Current.Web.ProcessBatchData(batchString.ToString()); The only disadvantage that I can think of right now is that all the items you delete will be put in the Recycle Bin. How can I prevent that? I also found the solution below: // web is the SPWeb object your lists belong to // Before starting your deletion web.Site.WebApplication.RecycleBinEnabled = false; // When you are finished, re-enable it web.Site.WebApplication.RecycleBinEnabled = true; Ref [Here](http://www.entwicklungsgedanken.de/2008/04/02/how-to-speed-up-the-deletion-of-large-amounts-of-list-items-within-sharepoint/) But the disadvantage of that solution is that it does not just keep future deletions out of the Recycle Bin; it also removes everything that is already in the Recycle Bin, which users do not want. Any idea how to avoid deleting those existing items? Many thanks in advance, TQT

    Read the article

  • Keeping User-Input UITextView Content Constrained to Its Own Frame

    - by siglesias
    Trying to create a large textbox of fixed size. This problem is very similar to the 140 character constraint problem, but instead of stopping typing at 140 characters, I want to stop typing when the edge of the textView's frame is reached, instead of extending below into the abyss. Here is what I've got for the delegate method. Seems to always be off by a little bit. Any thoughts? - (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)text { BOOL edgeBump = NO; CGSize constraint = textView.frame.size; CGSize size = [[textView.text stringByAppendingString:text] sizeWithFont:textView.font constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap]; CGFloat height = size.height; if (height > textView.frame.size.height) { edgeBump = YES; } if([text isEqualToString:@"\b"]){ return YES; } else if(edgeBump){ NSLog(@"EDGEBUMP!"); return NO; } return YES; } EDIT: As per Max's suggestion below, here is the code that works: - (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)text { CGSize constraint = textView.frame.size; NSString *whatWasThereBefore = textView.text; textView.text = [textView.text stringByReplacingCharactersInRange:range withString:text]; if (textView.contentSize.height >= constraint.height) { textView.text = whatWasThereBefore; } return NO; }

    Read the article

  • Time taken for memcpy decreases after a certain point

    - by tss
    I have some code which increases the size of a memory block (identified by a pointer) exponentially. Instead of realloc, I use malloc followed by memcpy. Something like this: int size=5,newsize; int *c = malloc(size*sizeof(int)); int *temp; while(1) { newsize=2*size; //begin time temp=malloc(newsize*sizeof(int)); memcpy(temp,c,size*sizeof(int)); //end time //print time in milliseconds c=temp; size=newsize; } Thus the number of bytes getting copied increases exponentially. The time required for this task also increases almost linearly with the increase in size. However, after a certain point, the time taken abruptly drops to a very small value and then remains constant. I recorded the time for similar code, copying data (of my own type): 5 -> 10 - 2 ms 10 -> 20 - 2 ms . . 2560 -> 5120 - 5 ms . . 20480 -> 40960 - 30 ms 40960 -> 91920 - 58 ms 367680 -> 735360 - 2 ms 735360 -> 1470720 - 2 ms 1470720 -> 2941440 - 2 ms What is the reason for this drop in time? Does a more optimal memcpy method get called when the size is large?

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to read such a huge amount of data. The files are mapped into the process memory space only when needed, and with this the process memory is well under control. But what is observed is that, with memory mapping, the system cache keeps on increasing until it occupies the available physical memory. This leads to the slowing down of the entire system. My question is how to prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this the read operations take a considerable amount of time and slow down the application performance. How can I achieve scalability without sacrificing much on performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior. Any good links explaining the same would also be helpful.

    Read the article

  • WIN32 Logon question

    - by Lalit_M
    We have developed an ASP.NET 3.5 web application on Web Server 2008 and have implemented a custom authentication solution using Active Directory as the credentials store. Our front-end application uses a normal login form to capture the user name and password and leverages the Win32 LogonUser method to authenticate the user's credentials. When we call the LogonUser method, we use LOGON32_LOGON_NETWORK as the logon type. The issue we have found is that user profile folders are being created under the C:\Users folder of the web server. The folder seems to be created when a new user who has never logged on before logs in for the first time. As the number of new users logging into the application grows, disk space is shrinking due to the large number of new user folders getting created. I need to get the token back after authentication (authenticated / password locked / wrong password) for further use, and based on that logic show different web pages. Has anyone seen this behavior with the Win32 LogonUser method? Please answer the following: Is it possible to disable this behavior? The folder takes 2.78 MB of space for every new user and is eating my disk space. I have tried LOGON32_LOGON_BATCH, but it was giving error 1385 when authenticating the user. For any solution related to LOGON32_LOGON_BATCH, can you please confirm whether that will stop creating the folders at C:\Users? For any possible solution, I need either a way to disable the creation of the folders under C:\Users, or another option for authenticating users that will not create the folders.

    Read the article

  • power and modulo on the fly for big numbers

    - by user unknown
    I raise some base b to the power p and take the result modulo m. Let's assume b=55170 or 55172 and m=3043839241 (which happens to be the square of 55171). The Linux calculator bc gives these results (we need them for checking): echo "p=5606;b=55171;m=b*b;((b-1)^p)%m;((b+1)^p)%m" | bc 2734550616 309288627 Now calculating 55170^5606 gives a somewhat large number, but since I have to do a modulo operation, I thought I could circumvent the usage of BigInt, because of: (a*b) % c == ((a%c) * (b%c))%c i.e. (9*7) % 5 == ((9%5) * (7%5))%5 => 63 % 5 == (4 * 2) %5 => 3 == 8 % 5 ... and a^d = a^(b+c) = a^b * a^c, therefore I can split d in half, which gives, for even or odd d, d/2 and d-(d/2); so for 8^5 I can calculate 8^2 * 8^3. So my (defective) method, which always reduces by the modulus on the fly, looks like this: def powMod (b: Long, pot: Int, mod: Long) : Long = { if (pot == 1) b % mod else { val pot2 = pot/2 val pm1 = powMod (b, pot, mod) val pm2 = powMod (b, pot-pot2, mod) (pm1 * pm2) % mod } } and fed with some values: powMod (55170, 5606, 3043839241L) res2: Long = 1885539617 powMod (55172, 5606, 3043839241L) res4: Long = 309288627 As we can see, the second result is exactly the same as the one above, but the first one looks quite different. I'm doing a lot of such calculations, and they seem to be accurate as long as they stay in the range of Int, but I can't see any error. Using a BigInt works as well, but is way too slow: def calc2 (n: Int, pri: Long) = { val p: BigInt = pri val p3 = p * p val p1 = (p-1).pow (n) % (p3) val p2 = (p+1).pow (n) % (p3) print ("p1: " + p1 + " p2: " + p2) } calc2 (5606, 55171) p1: 2734550616 p2: 309288627 (same result as with bc) Can somebody see the error in powMod?
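    For reference, a tiny Python sketch (an illustration, not the poster's Scala) of iterative square-and-multiply exponentiation that applies the (a*b) % c identity after every multiplication; it reproduces the bc reference values above, and Python's built-in pow(b, p, m) does the same job:

    ```python
    def pow_mod(b, p, m):
        # square-and-multiply, reducing modulo m after every multiplication
        result = 1
        b %= m
        while p > 0:
            if p & 1:                  # odd exponent: multiply in the current base
                result = (result * b) % m
            b = (b * b) % m            # square the base
            p >>= 1
        return result

    m = 55171 * 55171                  # 3043839241
    print(pow_mod(55170, 5606, m))     # 2734550616, matching bc
    print(pow_mod(55172, 5606, m))     # 309288627, matching bc
    ```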

    Read the article

  • How can I manage building library projects that produce both a static lib and a dll?

    - by Scott Langham
    I've got a large Visual Studio solution with ~50 projects. There are configurations for StaticDebug, StaticRelease, Debug and Release. Some libraries are needed in both DLL and static lib form. To get them, we rebuild the solution with a different configuration. The Configuration Manager window is used to set up which projects need to build in which flavours: static lib, dynamic DLL or both. This can be quite tricky to manage, and it's a bit annoying to have to build the solution multiple times and select the configurations in the right order. Static versions need building before non-static versions. I'm wondering, instead of this current scheme, might it be simpler to manage if, for the projects I needed to produce both a static lib and a dynamic DLL, I created two projects? E.g.: CoreLib CoreDll I could either make both of these projects reference all the same files and build them twice, or, I'm wondering, would it be possible to build CoreLib and then get CoreDll to link it to generate the DLL? I guess my question is, do you have any advice on how to structure your projects in this kind of situation? Thanks.

    Read the article

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database which has a number of tables. Two of these tables contain a lot of large data .. mainly because one of them is VARBINARY(MAX) and the sister table is GEOGRAPHY. (Why two tables? Read below if you're interested***) The data in these tables are geospatial shapes, such as zipcode boundaries. Now, the first 70K-odd rows are for DataType = 1; the remaining 5 million rows are for DataType = 2. Now, is it possible to split the table data into two files, so all rows for DataType != 2 go into File_A and DataType = 2 goes into File_B? This way, when I back up the DB, I can skip adding File_B so my download is waaaaay smaller? Is this possible? I'm guessing you might be thinking - why not keep them as TWO extra tables? Mainly because in the code, the data is conceptually the same .. it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates in my model instead of one. ***Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY, and then drop that into EF.

    Read the article

  • Where can I find a powerful, standards compliant, web-based interactive org chart API?

    - by Adam Soltys
    Hi, I'm looking to build an interactive web-based org chart for a large organization. I somewhat like the interface at ancestry.com where you can hover over people and pan/zoom around and click on different nodes to make them the root. Ideally, I'd like it if people could belong to multiple organizational entities like committees, working groups, etc. In other words the API should support graphs in general, not just trees. I'd like to be able to visually explode each organizational substructure into substituents by clicking on it, with a nice animation of the employees ballooning or spilling out so you can really interactively drill down through the organization. I found http://code.google.com/apis/visualization/documentation/gallery/orgchart.html but it looks a bit rudimentary. I know there are desktop tools like OrgPlus and Visio that can build static charts but I'm really looking for a free, web-based API with open standards-based output like SVG or HTML5 Canvas elements rather than Flash or some proprietary output. Something I can embed into a custom web application and style myself. Something interactive.

    Read the article

  • How to convert Vector Layer coordinates into Map Latitude and Longitude in OpenLayers.

    - by Jenny
    I'm pretty confused. I have a point: x= -12669114.702301 y= 5561132.6760608 that I got from drawing a square on a vector layer with the DrawFeature controller. The numbers seem...erm...awfully large, but they seem to work, because if I later draw a square with all the same points, it's in the same position, so I figure they have to be right. The problem is when I try to convert this point to latitude and longitude. I'm using: map.getLonLatFromPixel(pointToPixel(points[0])); where points[0] is a geometry Point, and the pointToPixel function takes any point and turns it into a pixel (since getLonLatFromPixel needs a pixel). It does this by simply taking the point's x and making it the pixel's x, and so on. The latitude and longitude I get is on the order of: lat: -54402718463.864 lng: -18771380.353223 This is very clearly wrong. I'm left really confused. I tried projecting this object, using: .transform(new OpenLayers.Projection("EPSG:4326"), map.getProjectionObject()); but I don't really get it and am pretty sure I did it incorrectly anyway. My code is here: http://pastie.org/909644 I'm sort of at a loss. The coordinates seem consistent, because I can reuse them to get the same result...but they seem way larger than any of the examples I'm seeing on the OpenLayers website...
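    Coordinates of that magnitude are usually spherical-Mercator metres (EPSG:900913), the projection many OpenLayers base layers use, rather than degrees. Purely as an illustration, and assuming that really is the map projection here, the standard inverse-Mercator conversion in Python:

    ```python
    import math

    R = 6378137.0  # spherical-Mercator earth radius in metres

    def mercator_to_lonlat(x, y):
        """Convert EPSG:900913 metres to EPSG:4326 degrees."""
        lon = math.degrees(x / R)
        lat = math.degrees(2.0 * math.atan(math.exp(y / R)) - math.pi / 2.0)
        return lon, lat

    print(mercator_to_lonlat(-12669114.702301, 5561132.6760608))
    # roughly (-113.8, 44.6) if the values really are Mercator metres
    ```

    Note that OpenLayers' transform takes (source, destination), so going from map coordinates to lat/lon would transform from map.getProjectionObject() to EPSG:4326 — the reverse of the call quoted above — which may be part of the confusion.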

    Read the article

  • ASP.NET problem - Firebug shows odd behaviour

    - by Brandi
    I have an ASP.NET application that does a large database read. It loads up a GridView inside an UpdatePanel. In VS2008, just running on my local machine, it runs fantastically. In production (identical code, just published and put on one of our network servers), it runs slow as dirt. Debug is set to false, so this is not the cause of the slowdown. I'm not an experienced web developer, so besides that, feel free to suggest the obvious. I have been using Firebug to determine what's going on, and here is what that has turned up: on production, there are around 500 requests. The timeline bar is very short. The size column varies from run to run, but is always the same for the duration of the run. Locally, there are about 30 requests. The timeline bar takes up the entire space. Can anyone shed some light on why this is happening and what I can do to fix it? Also, I can't find much of anything on the web about this, so any references are helpful too.

    Read the article

  • VS2008 Link Error Using SafeInt3.hpp in 64-bit mode.

    - by photo_tom
    I have the code below, which links and runs fine in 32-bit mode - #include "safeint3.hpp" typedef SafeInt<SIZE_T> SAFE_SIZE_T; SAFE_SIZE_T sizeOfCache; SAFE_SIZE_T _allocateAmt; Where safeint3.hpp is the current version that can be found on CodePlex SafeInt. For those who are unaware of it, SafeInt is a template class that makes working with different integer types and sizes "safe". To quote a Channel 9 video on the software - "it writes the code that you should". Which is my case: I have a class that is managing a large in-memory cache of objects (6 GB) and I am very concerned about making sure that I don't have overflow/underflow issues on my pointers/sizes/other integer variables. In this use, it solves many problems. My problem comes when moving from 32-bit dev mode to 64-bit production mode. When I build the app in this mode, I'm getting the following linker warnings - 1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyUint64(unsigned __int64 const &,unsigned __int64 const &,unsigned __int64 *)" (?IntrinsicMultiplyUint64@@YA_NAEB_K0PEA_K@Z) already defined in ImageInRamCache.obj; second definition ignored 1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyInt64(__int64 const &,__int64 const &,__int64 *)" (?IntrinsicMultiplyInt64@@YA_NAEB_J0PEA_J@Z) already defined in ImageInRamCache.obj; second definition ignored While I understand I can ignore the warning, I would like either (a) to prevent it from occurring or (b) to make it disappear so that my QA department doesn't flag it as a problem. And after spending some time researching it, I cannot find a way to do either.

    Read the article

  • Rails : can't write unknown attribute `url'

    - by user2954789
    I am new to Ruby on Rails, and I am learning by creating a blog. I am not able to save to my blogs table and I get this error: "can't write unknown attribute `url'". Blogs migration: db/migrate/ class CreateBlogs < ActiveRecord::Migration --def change --- create_table :blogs do |t| ---- t.string :title ---- t.text :description ---- t.string :slug ---- t.timestamps --- end --end end Blogs model: /app/models/blogs.rb class Blogs < ActiveRecord::Base --acts_as_url :title --def to_param ---url --end --validates :title, presence:true end Blogs controller: /app/controllers/blogs_controller.rb class BlogsController < ApplicationController before_action :require_login --def new --- @blogs = Blogs.new --end --def show ---@blogs = Blogs.find(params[:id]) --end --def create ---@blogs = Blogs.new(blogs_params) --if @blogs.save ---flash[:success] = "Your Blog has been created." ---redirect_to home_path --else ---render 'new' --end -end --def blogs_params ---params.require(:blogs).permit(:title,:description) --end private --def require_login ---unless signed_in? ----flash[:error] = "You must be logged in to create a new Blog" ----redirect_to signin_path ---end --end end Blogs form: /app/views/blogs/new.html.erb <%= form_for @blogs, url: blogs_path do |f| %> <%= render 'shared/error_messages_blogs' %> <%= f.label :title %> <%= f.text_field :title %> <%= f.label :description %> <%= f.text_area :description %> <%= f.submit "Submit Blog", class: "btn btn-large btn-primary" %> <% end %> I have also added "resources :blogs" to my routes.rb file. I get this error in the controller at if @blogs.save

    Read the article

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is, I'm getting to the point where there are just too many records: my insert/update is taking too long and my session is getting killed by the governor. Here's pseudocode of my update process: sqlsel := 'SELECT col1, col2, col3, sysdate FROM table2@remote_location dpi WHERE (col1, col2, col3) IN ( SELECT col1, col2, col3 FROM table2@remote_location MINUS SELECT DISTINCT col1, col2, col3 FROM table1 mpc WHERE facility = '''||load_facility||''' )'; EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1; I've tried the MERGE statement: MERGE INTO table1 t1 USING ( SELECT col1, col2, col3 FROM table2@remote_location ) t2 ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 ) WHEN NOT MATCHED THEN INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm ) VALUES (t2.col1, t2.col2, t2.col3, sysdate ) But there seems to be a confirmed bug in versions prior to Oracle 10.2.0.4 with the MERGE statement when merging against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or write it in another way, to get the best performance? Thanks.

    Read the article

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people who visit the website also receive a random, paginate-able list of records. Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page); however, since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU intensive. My feeling is I should create an intersection table, something like: random_sorts( sort_id, created_at, user_id NULL if guest) random_sort_items( sort_id, item_id, position ) And then simply store the 'sort_id' in the session. Then, I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects were removed, although deletion is very rare). And, as load increases, the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem but I can't find any Rails plugins that do this... Any ideas? Thanks!!
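    One storage-free alternative (a sketch of an assumption on my part, not from the question) is to keep only a small random seed in the session and re-derive the same shuffle on every request; a seeded ORDER BY RAND(seed) in MySQL can serve a similar purpose on the database side. In Python:

    ```python
    import random

    def page_of_ids(all_ids, seed, page, per_page=20):
        # The same seed always produces the same shuffle, so pagination stays
        # stable for as long as the seed lives in the session.
        rng = random.Random(seed)
        order = list(all_ids)
        rng.shuffle(order)
        start = page * per_page
        return order[start:start + per_page]

    ids = [3, 7, 9, 15, 22, 31, 40, 41, 57, 60]   # gaps in the IDs are fine
    seed = 424242                                  # e.g. generated once per session
    assert page_of_ids(ids, seed, 0, 5) == page_of_ids(ids, seed, 0, 5)  # repeatable
    print(page_of_ids(ids, seed, 0, 5))
    ```

    The caveat from the question still applies: records created or removed between requests will shift the ordering.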

    Read the article

  • How to lazily process an XML document with hexpat?

    - by Florian
    In my search for a Haskell library that can process large (300-1000 MB) XML files I came across hexpat. There is an example in the Haskell Wiki that claims to -- Process document before handling error, so we get lazy processing. For testing purposes I have redirected the output to /dev/null and thrown a 300 MB file at it. Memory consumption kept rising until I had to kill the process. Now I have removed the error handling from the process function: process :: String -> IO () process filename = do inputText <- L.readFile filename let (xml, mErr) = parse defaultParseOptions inputText :: (UNode String, Maybe XMLParseError) hFile <- openFile "/dev/null" WriteMode L.hPutStr hFile $ format xml hClose hFile return () As a result the function now uses constant memory. Why does the error handling result in massive memory consumption? As far as I understand, xml and mErr are two separate unevaluated thunks after the call to parse. Does format xml evaluate xml and build the evaluation tree of mErr? If yes, is there a way to handle the error while using constant memory? http://www.haskell.org/haskellwiki/Hexpat/

    Read the article

  • Provider not notified from cookbook_file

    - by wittyhandle
    I'm working on an SSL provider using Vagrant (1.0.5) and chef-solo (10.12.0). I have my provider, called ssl, within a cookbook called gtm_cq, and I define it as such in my cookbook's default recipe: gtm_cq_ssl "author" do # attributes will come later end I then have the cookbook_file below, which should notify my ssl provider's import action once it pushes the cert up to the server: cookbook_file "#{node[:cq][:ssl][:author_cert_location]}/foo.cer" do source "foo.cer" owner "crx" group "root" mode "0644" notifies :import, resources(:gtm_cq_ssl => "author") end When I run this, foo.cer gets pushed up as expected, but the import action of my ssl provider is never called. The most I see of any reference is these couple of lines in the log (log headers removed): .. cookbook_file[/opt/cq5/author/foo.cer] sending import action to gtm_cq_ssl[author] (delayed) .. Processing gtm_cq_ssl[author] action import (gtm_cq::author line 34) There's a large, very obvious log statement as well as the use of another cookbook_file for a test file to push something up to the server. No log statement, no test file pushed. I'm certain too that the foo.cer file is removed from the server before each test. I found that if I edit my notifies line like so, with :immediately notifies :import, resources(:gtm_cq_ssl => "author"), :immediately it seems to work. And I suppose this is OK in my particular case, but it would seem something is not right if that's the only way I can call my provider. Any help on this would be greatly appreciated. Thanks!

    Read the article
