Search Results

Search found 23323 results on 933 pages for 'worst is better'.


  • How to make my Django save view better

    - by user558251
    Hi guys, sorry for this post, but I need help with my application: I need to optimize my view. I have 5 models; how can I do this?

        def save(request):
            # get the request.POST into content
            if request.POST:
                content = request.POST
                dicionario = {}  # a dict to collect the values in content
                for key, value in content.items():
                    if key == 'curso':  # my FK: look up the Course
                        busca_curso = Curso.objects.get(id=value)
                        dicionario.update({key: busca_curso})
                    else:
                        dicionario.update({key: value})
                # create the new teacher
                Professor.objects.create(**dicionario)

    My questions are: 1 - How can I make this function generic? Can I pass the model names in as variables and use %s formatting for the create and get calls, like this?

        foo = "Teacher"
        bar = "Course"

        def save(request, bar, foo):
            if request.POST:
                ...
                if key == 'course':
                    get_course = (%s.objects.get(id=value)) % bar    # broken attempt, as posted
                ...
                (%s.objects.create(**dict)) % foo

    I tried this in my view but it doesn't work. Can somebody help me make it work? Thanks
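
    A minimal sketch of one way to do this without building code in strings (the app label 'myapp' and the parameter names are my own placeholders): Django can resolve a model class from its name with get_model, and the resulting class is used exactly like Curso or Professor above.

        # hedged sketch: resolve model classes by name instead of %s-built code.
        # In newer Django the helper lives at django.apps.apps.get_model.
        from django.db.models import get_model

        def save(request, model_name, fk_field, fk_model_name):
            if request.POST:
                model = get_model('myapp', model_name)          # e.g. 'Professor'
                fk_model = get_model('myapp', fk_model_name)    # e.g. 'Curso'
                data = {}
                for key, value in request.POST.items():
                    if key == fk_field:                         # e.g. 'curso'
                        data[key] = fk_model.objects.get(id=value)
                    else:
                        data[key] = value
                model.objects.create(**data)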

    Read the article

  • git: a better way to revert without an additional revert commit

    - by Albert
    I have commits in a branch that exists both locally and remotely, and I want to throw one commit out of the history and move some of the others onto a branch of their own. Basically, right now I have:

        D---E---F---G   master

    and I want:

          E---G   topic
         /
        D           master

    That should happen both in my local repository and in the remote one (there is only one, called origin). What is the cleanest way to get that? Also, there are other people who have cloned the repo and checked out the master branch. If I make such a change in the remote repo, will 'git pull' work for them to get to the same state as well?
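
    A sketch of one way to get there, assuming D, E and G stand in for the real commit SHAs. Note that this rewrites already-published history, so it needs a forced push:

        # build topic starting from D, replaying only E and G (F is dropped)
        git checkout -b topic <sha-of-D>
        git cherry-pick <sha-of-E> <sha-of-G>

        # move master back to D
        git checkout master
        git reset --hard <sha-of-D>

        # publish; --force is required because master's history was rewritten
        git push --force origin master
        git push origin topic

    As for the clones: a plain 'git pull' will try to merge the old master into the rewritten one, which is not the desired state. The other users would instead need something like 'git fetch' followed by 'git reset --hard origin/master' (discarding any local commits they made on master).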

    Read the article

  • A better way of getting a data table with various column types into string array

    - by Vlad
    This should be an easy one; it looks like I got myself too confused. I get a table from a database, with data ranging from varchar to int to NULL values. The cheap and dirty way of converting this into a tab-delimited file that I already have is this (shrunken to preserve space, ugliness kept on par with the original):

        da.Fill(dt)   ' da - DataAdapter, dt - DataTable

        Dim lColumns As Long = dt.Columns.Count
        Dim arrColumns(dt.Columns.Count) As String
        Dim arrData(dt.Columns.Count) As Object
        Dim j As Long = 0

        For i = 0 To dt.Rows.Count - 1
            arrData = dt.Rows(i).ItemArray()
            For j = 0 To arrData.GetUpperBound(0) - 1
                arrColumns(j) = arrData(j).ToString
            Next
            wrtOutput.WriteLine(String.Join(strFieldDelimiter, arrColumns))
            Array.Clear(arrColumns, 0, arrColumns.GetLength(0))
            Array.Clear(arrData, 0, arrData.GetLength(0))
        Next

    Not only is this ugly and inefficient, it is also getting on my nerves. Besides, I want, if possible, to avoid the infamous double loop through the table. I would really appreciate a clean and safe way of rewriting this piece. I like the approach that is used here (especially since it tries to solve the same problem that I have), but it crashes on me when I apply it to my case directly.
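
    A sketch under two assumptions: LINQ is available (Imports System.Linq, .NET 3.5+), and wrtOutput / strFieldDelimiter exist as in the original. DBNull.Value.ToString() yields "", so NULL columns simply become empty fields, and there is only one pass over the rows:

        ' one WriteLine per row; the per-cell conversion is done by Select
        For Each row As DataRow In dt.Rows
            wrtOutput.WriteLine(String.Join(strFieldDelimiter, _
                row.ItemArray.Select(Function(o) o.ToString()).ToArray()))
        Next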

    Read the article

  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test downloading this file I get an out-of-memory exception. The only way I know to download the file is below. The reason I'm using the code below is that the URL is encoded and I can't let the user link directly to the file. Is there another way to download this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url), FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int)numBytes);
        string filename = Path.GetFileName(url);
        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();
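
    A sketch of the usual fix, assuming this runs under ASP.NET/IIS: Response.TransmitFile hands the file to IIS to stream, so it is never loaded into managed memory, and the encoded-URL indirection is preserved because the client still never sees the real path.

        string path = context.Server.MapPath(url);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = false;                       // stream, don't accumulate
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.TransmitFile(path);                   // streamed by IIS, not read into a byte[]
        context.Response.End();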

    Read the article

  • Better way to compare neighboring cells in matrix

    - by HyperCube
    Suppose I have a matrix of size 100x100 and I would like to compare each pixel to its direct neighbours (left, upper, right, lower) and then do some operations on the current matrix or on a new one of the same size. A sample in Python/NumPy could look like the following (the comparison against 0.5 has no meaning; I just want a working example of some operation while comparing the neighbours):

        import numpy as np

        my_matrix = np.random.rand(100, 100)
        new_matrix = np.zeros((100, 100))   # np.zeros, not np.array((100,100)), to get a 100x100 array
        my_range = np.arange(1, 99)

        for i in my_range:
            for j in my_range:
                if my_matrix[i, j+1] > 0.5:
                    new_matrix[i, j+1] = 1
                if my_matrix[i, j-1] > 0.5:
                    new_matrix[i, j-1] = 1
                if my_matrix[i+1, j] > 0.5:
                    new_matrix[i+1, j] = 1
                if my_matrix[i-1, j] > 0.5:
                    new_matrix[i-1, j] = 1
                if my_matrix[i+1, j+1] > 0.5:
                    new_matrix[i+1, j+1] = 1
                if my_matrix[i+1, j-1] > 0.5:
                    new_matrix[i+1, j-1] = 1
                if my_matrix[i-1, j+1] > 0.5:
                    new_matrix[i-1, j+1] = 1

    This can get really nasty if I want to step into one neighbouring cell and start from it to do a similar task... Do you have suggestions for how this can be done more efficiently? Is that even possible?
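
    A vectorized sketch of the same kind of operation (illustrative, not a drop-in replacement for the exact logic above): shifted slices give every interior cell and its four neighbours as whole arrays, so the double Python loop disappears.

        import numpy as np

        my_matrix = np.random.rand(100, 100)
        new_matrix = np.zeros_like(my_matrix)

        # views of the interior cells and their four direct neighbours
        centre = my_matrix[1:-1, 1:-1]
        up     = my_matrix[:-2,  1:-1]
        down   = my_matrix[2:,   1:-1]
        left   = my_matrix[1:-1, :-2]
        right  = my_matrix[1:-1, 2:]

        # example operation: mark interior cells greater than all four neighbours
        peaks = (centre > up) & (centre > down) & (centre > left) & (centre > right)
        new_matrix[1:-1, 1:-1][peaks] = 1

    The same slicing trick covers the "step into a neighbouring cell" case: shifting the slice bounds by one selects the neighbour grid instead of the centre grid.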

    Read the article

  • How to create a better table structure

    - by user160820
    For my website I have the tables:

        Category :: id | name
        Product  :: id | name | categoryid

    Each category may come in different sizes, and for that I have also created a table:

        Size :: id | name | categoryid | price

    Now the problem is that each category also has different ingredients that the customer can choose to add to his purchased product, and these ingredients have different prices for different sizes. For that I also have a table like:

        Ingredient :: id | name | sizeid | categoryid | price

    I am not sure this structure is really normalized. Can someone please help me optimize this structure, and tell me which indexes I need for it?
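
    A sketch of one possible normalisation (my reading of the intent, not a definitive answer): ingredients are defined once per category, and their size-dependent prices move into a separate table keyed by (ingredient, size), so no price is ever duplicated.

        CREATE TABLE category (
            id   INT PRIMARY KEY,
            name VARCHAR(100) NOT NULL
        );

        CREATE TABLE product (
            id          INT PRIMARY KEY,
            name        VARCHAR(100) NOT NULL,
            category_id INT NOT NULL REFERENCES category(id)
        );

        CREATE TABLE size (
            id          INT PRIMARY KEY,
            name        VARCHAR(50)  NOT NULL,
            category_id INT NOT NULL REFERENCES category(id),
            price       DECIMAL(8,2) NOT NULL
        );

        CREATE TABLE ingredient (
            id          INT PRIMARY KEY,
            name        VARCHAR(100) NOT NULL,
            category_id INT NOT NULL REFERENCES category(id)
        );

        -- one row per (ingredient, size) combination
        CREATE TABLE ingredient_price (
            ingredient_id INT NOT NULL REFERENCES ingredient(id),
            size_id       INT NOT NULL REFERENCES size(id),
            price         DECIMAL(8,2) NOT NULL,
            PRIMARY KEY (ingredient_id, size_id)
        );

    For indexes: the foreign-key columns (the category_id columns, ingredient_id, size_id) are the natural candidates, since joins and lookups filter on them; the composite primary key on ingredient_price already covers queries that start from an ingredient.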

    Read the article

  • Better way of showing file upload errors?

    - by coure06
    Model:

        public class EmailAttachment
        {
            public string FileName { get; set; }
            public string FileType { get; set; }
            public int FileSize { get; set; }
            public Stream FileData { get; set; }
        }

        public class ContactEmail : IDataErrorInfo
        {
            public string Name { get; set; }
            public string Email { get; set; }
            public string Message { get; set; }
            public EmailAttachment Attachment { get; set; }

            public string Error { get { return null; } }

            public string this[string propName]
            {
                get
                {
                    if (propName == "Name" && String.IsNullOrEmpty(Name))
                        return "Please Enter your Name";
                    if (propName == "Email")
                    {
                        if (String.IsNullOrEmpty(Email))
                            return "Please Provide an Email Address";
                        else if (!Regex.IsMatch(Email, ".+\\@.+\\..+"))
                            return "Please Enter a valid email Address";
                    }
                    if (propName == "Message" && String.IsNullOrEmpty(Message))
                        return "Please Enter your Message";
                    return null;
                }
            }
        }

    And my controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Con(ContactEmail ce, HttpPostedFileBase file)
        {
            return View();
        }

    Now the problem: from the form I get Name, Email, Message and the uploaded file. I get validation errors automatically for Name, Email and Message through the indexer (public string this[string propName]). How can I show a validation error when Attachment.FileSize > 10000? If I write that check in the indexer, Attachment is always null. How can I fill the Attachment object of ContactEmail so that I can manage all the errors in the same place?
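
    A sketch of one wiring (my own, not the poster's): copy the posted file into the model before validation, then report the size error through ModelState so it surfaces alongside the IDataErrorInfo messages.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Con(ContactEmail ce, HttpPostedFileBase file)
        {
            if (file != null && file.ContentLength > 0)
            {
                ce.Attachment = new EmailAttachment
                {
                    FileName = file.FileName,
                    FileType = file.ContentType,
                    FileSize = file.ContentLength,
                    FileData = file.InputStream
                };
            }

            if (ce.Attachment != null && ce.Attachment.FileSize > 10000)
                ModelState.AddModelError("Attachment", "The attachment must be 10000 bytes or smaller.");

            if (!ModelState.IsValid)
                return View(ce);

            // ... send the email ...
            return View();
        }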

    Read the article

  • Better understanding of my SQL transactions

    - by Slew Poke
    I just realized that my application was needlessly making 50+ database calls per user request due to some hidden coding -- hidden in the sense that between LINQ, persistence frameworks and events it just so turned out that a huge number of calls were being made without me being aware. Is there a recommended way to analyze individual transactions going to my SQL 2008 database, preferably with some integration to my Visual Studio 2010 environment? I want to be able to 'spy' on individual transactions being made, but only for certain pieces of my code, and without making serious changes to either the code or database.
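
    A minimal sketch of one in-code option, assuming LINQ to SQL is among the frameworks involved (MyDataContext and db.Posts are placeholder names): the DataContext.Log property captures every SQL statement the context issues, and can be turned on for just the suspect code paths.

        // hedged sketch; MyDataContext and db.Posts are hypothetical
        using (var db = new MyDataContext())
        {
            var sqlLog = new System.IO.StringWriter();
            db.Log = sqlLog;                    // capture all SQL issued by this context
            var posts = db.Posts.ToList();      // the code being inspected
            System.Diagnostics.Debug.Write(sqlLog.ToString());  // shows in the VS Output window
        }

    Outside the code, SQL Server Profiler (which ships with SQL Server 2008) can trace individual statements and can be filtered by application name, which covers frameworks that don't expose a log hook.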

    Read the article

  • Better way to catch trouble points

    - by mac
    A user submits a CSV file which is consumed by a program. Values used throughout the program come from the CSV, so naturally it is a problem if values are missing. Below is my solution.

    At the top:

        private List<String> currentFieldName = new ArrayList<String>();

    As part of the method:

        try {
            setCurrentFieldName("Trim Space");
            p.setTrimSpace(currentLineArray[dc.getTRIM_POSITION()].equals("yes") ? true : false);
            setCurrentFieldName("Ignore Case");
            p.setIgnoreCase(currentLineArray[dc.getIGNORE_CASE_POSITION()].equals("yes") ? true : false);
        } catch (NullPointerException e) {
            throw new InputSpreadsheetValueUnassignedException("\"Type\" field not set: " + currentFieldName);
        }

    And the method which keeps track of the field currently being looked at:

        private void setCurrentFieldName(String fieldName) {
            currentFieldName.clear();
            currentFieldName.add(fieldName);
        }

    The idea is that if the user fails to submit a value and I end up getting null, then before throwing an exception I will know which value was not assigned. That said, my specific questions: Is what I have shown above an acceptable solution? Can you suggest something more elegant?
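
    A sketch of a somewhat more direct approach (require() is my own helper name): test for the missing value explicitly instead of catching NullPointerException, and pass the field name at the point of use, which removes the need for the single-element list.

        // hedged sketch; reuses the exception type from the question
        private String require(String[] line, int position, String fieldName) {
            if (position >= line.length || line[position] == null) {
                throw new InputSpreadsheetValueUnassignedException(
                        "\"" + fieldName + "\" field not set");
            }
            return line[position];
        }

        // usage:
        p.setTrimSpace("yes".equals(require(currentLineArray, dc.getTRIM_POSITION(), "Trim Space")));
        p.setIgnoreCase("yes".equals(require(currentLineArray, dc.getIGNORE_CASE_POSITION(), "Ignore Case")));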

    Read the article

  • What is a better, cleaner way of using List<T>

    - by Tim Meers
    I'm looking to implement a few nicer ways to use List<T> in a couple of apps I'm working on. My current implementation looks like this.

    MyPage.aspx.cs:

        protected void Page_Load(object sender, EventArgs e)
        {
            BLL.PostCollection oPost = new BLL.PostCollection();
            oPost.OpenRecent();
            rptPosts.DataSource = oPost;
            rptPosts.DataBind();
        }

    BLL class(es):

        public class Post
        {
            public int PostId { get; set; }
            public string PostTitle { get; set; }
            public string PostContent { get; set; }
            public DateTime PostCreatedDate { get; set; }

            public void OpenRecentInitFromRow(DataRow row)
            {
                this.PostId = (int)row["id"];
                this.PostTitle = (string)row["title"];
                this.PostContent = (string)row["content"];
                this.PostCreatedDate = (DateTime)row["createddate"];
            }
        }

        public class PostCollection : List<Post>
        {
            public void OpenRecent()
            {
                DataSet ds = DbProvider.Instance().Post_ListRecent();
                foreach (DataRow row in ds.Tables[0].Rows)
                {
                    Post oPost = new Post();
                    oPost.OpenRecentInitFromRow(row);
                    Add(oPost);
                }
            }
        }

    Now while this is all working well and good, I'm wondering if there is any way to improve it and make it cleaner than having to use two different classes to do something I think can happen in just one class, or behind an interface.
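
    A sketch of one way to collapse this into a single class (assumes .NET 3.5+ with using System.Linq and System.Data): a static factory on Post returns a plain List<Post>, so the PostCollection subclass is no longer needed.

        public class Post
        {
            public int PostId { get; set; }
            public string PostTitle { get; set; }
            public string PostContent { get; set; }
            public DateTime PostCreatedDate { get; set; }

            // builds the whole list in one place
            public static List<Post> OpenRecent()
            {
                DataSet ds = DbProvider.Instance().Post_ListRecent();
                return ds.Tables[0].Rows.Cast<DataRow>()
                         .Select(row => new Post
                         {
                             PostId = (int)row["id"],
                             PostTitle = (string)row["title"],
                             PostContent = (string)row["content"],
                             PostCreatedDate = (DateTime)row["createddate"]
                         })
                         .ToList();
            }
        }

        // usage in Page_Load:
        // rptPosts.DataSource = Post.OpenRecent();
        // rptPosts.DataBind();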

    Read the article

  • C++ Program performs better when piped

    - by ET1 Nerd
    I haven't done any programming in a decade. I wanted to get back into it, so I made this little pointless program as practice. The easiest way to describe what it does is with the output of my --help code block:

        ./prng_bench --help
        ./prng_bench: usage: ./prng_bench $N $B [$T]
        This program will generate an N digit base(B) random number until
        all N digits are the same. Once a repeating N digit base(B) number
        is found, the following statistics are displayed:
            -Decimal value of all N digits.
            -Time & number of tries taken to randomly find.
        Optionally, this process is repeated T times. When running multiple
        repetitions, averages for all N digit base(B) numbers are displayed
        at the end, as well as total time and total tries.

    My "problem" is that when the problem is "easy", say a 3 digit base 10 number, and I have it do a large number of passes, the "total time" is less when piped to grep, i.e. command ; command | grep took:

        ./prng_bench 3 10 999999 ; ./prng_bench 3 10 999999 | grep took
        ....
        Pass# 999999: All 3 base(10) digits = 3 base(10). Time: 0.00005 secs. Tries: 23
        It took 191.86701 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers.
        An average of 0.00019 secs & 99 tries was needed to find each one.
        It took 159.32355 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers.

    If I run the same command many times without grep, the time is always VERY close. I'm using srand(1234) for now, to test. The code between my calls to clock_gettime() for start and stop does not involve any stream manipulation, which would obviously affect time. I realize this is an exercise in futility, but I'd like to know why it behaves this way. Below is the heart of the program; here's a link to the full source on Dropbox if anybody wants to compile and test it (clock_gettime() requires -lrt): https://www.dropbox.com/s/6olqnnjf3unkm2m/prng_bench.cpp

        for (int pass_num = 1; pass_num <= passes; pass_num++) {   // executes $passes # of times
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time);   // get time
            start_time = timetodouble(temp_time);                  // convert to double, store as start_time

            // loops until the comparison 'for' fully completes; counts reps as 'tries'
            for (i = 1, tries = 0; i != 0; tries++) {
                for (i = 0; i < Ndigits; i++)                      // move forward through the array
                    results[i] = (rand() % base);                  // assign a random base(B) digit to each element
                /*for (i = 0; i < Ndigits; i++)                    // ---Debug lines: a LOT of output---
                    std::cout << " " << results[i];
                std::cout << "\n";*/                               // ---comment/uncomment to disable/enable---
                // move back through the array; a non-matching element breaks out with i != 0,
                // so new digits are drawn; if all digits are equal, i reaches 0 and the outer loop exits
                for (i = Ndigits - 1; i > 0 && results[i] == results[0]; i--);
            }

            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time);   // get time
            draw_time = (timetodouble(temp_time) - start_time);    // elapsed time for this pass
            total_time += draw_time;                               // add this pass's time to the total
            total_tries += tries;                                  // add this pass's tries to the total

            // Formatted output for each pass:
            //   Pass# ---: All -- base(--) digits = -- base(10). Time: ----.---- secs. Tries: -----
            std::cout << "Pass# " << std::setw(width_pass) << pass_num << ": All " << Ndigits
                      << " base(" << base << ") digits = " << std::setw(width_base) << results[0]
                      << " base(10). Time: " << std::setw(width_time) << draw_time
                      << " secs. Tries: " << tries << "\n";
        }
        if (passes == 1) return 0;   // no need for totals and averages of 1 pass

        // It took ----.---- secs & ------ tries to find --- repeating -- digit base(--) numbers.
        // An average of ---.---- secs & ---- tries was needed to find each one.
        std::cout << "It took " << total_time << " secs & " << total_tries << " tries to find "
                  << passes << " repeating " << Ndigits << " digit base(" << base << ") numbers.\n"
                  << "An average of " << total_time / passes << " secs & " << total_tries / passes
                  << " tries was needed to find each one. \n\n";
        return 0;
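
    One plausible explanation (an educated guess, not a verified diagnosis): stdio/iostream output is typically line-buffered when stdout is a terminal but fully buffered when it is a pipe, so the piped run makes far fewer write() system calls, and CLOCK_PROCESS_CPUTIME_ID charges those calls to the process. A quick way to separate output cost from the PRNG work:

        # compare: terminal, pipe, and no visible output at all
        ./prng_bench 3 10 999999
        ./prng_bench 3 10 999999 | grep took
        ./prng_bench 3 10 999999 > /dev/null

    If the /dev/null run matches the piped run, the difference is output handling, not the random-number loop.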

    Read the article

  • Flex Tile container: how to better organize the children

    - by Patrick
    Hi, I'm using a Tile container for my LinkButtons. I would like to know:

    1) how can I remove the space between the items in my Tile container?
    2) how can I set a dynamic width for my items? (at the moment they all have the same width, regardless of the width of the included component)
    3) how can I avoid displaying scrollbars if the items do not fit in the container?

    Thanks
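
    A sketch of the spacing and scrollbar settings, assuming the mx:Tile container (the component name was eaten by the original post's markup): horizontalGap/verticalGap control the space between items, and the scroll policies suppress the scrollbars. Note that Tile sizes every cell to its largest child by design, so per-item dynamic widths need a flow-style container instead (e.g. FlowBox from the flexlib project).

        <!-- hedged sketch; ids and labels are placeholders -->
        <mx:Tile id="linkTile"
                 horizontalGap="0" verticalGap="0"
                 horizontalScrollPolicy="off" verticalScrollPolicy="off">
            <mx:LinkButton label="Short"/>
            <mx:LinkButton label="A much longer label"/>
        </mx:Tile>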

    Read the article

  • Looking for a better alternative to linklabels

    - by user986086
    Please see the two long LinkLabels below (please ignore the black lines above). A LinkLabel's length is set dynamically at runtime, and as seen it can end up too long and overlap other text we have (there is other text where you see 'Differences'). My questions are: a) Can I limit the maximum length of a LinkLabel? b) Is it possible to use a scrollbar with a LinkLabel (or any similar control) in case it's too long? E.g. set it to 200 pixels, and if the text is longer, the user has to drag the horizontal scroller to see the end of it. I'm using VB.NET in Visual Studio 2008. THANK YOU
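
    A sketch of both options in WinForms terms (myLinkLabel is a placeholder): (a) cap the width and truncate with an ellipsis; (b) host the label in an auto-scrolling panel. AutoEllipsis is inherited from Label, though LinkLabel's custom painting does not always honor it, so the panel approach is the safer bet for long text.

        ' (a) fixed width with ellipsis
        myLinkLabel.AutoSize = False
        myLinkLabel.Width = 200
        myLinkLabel.AutoEllipsis = True

        ' (b) horizontal scrollbar via a hosting panel
        Dim host As New Panel()
        host.Width = 200
        host.Height = myLinkLabel.Height + SystemInformation.HorizontalScrollBarHeight
        host.AutoScroll = True
        myLinkLabel.AutoSize = True
        host.Controls.Add(myLinkLabel)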

    Read the article

  • Becoming better at Vim

    - by Autopulated
    I've been using Vim for quite a long time, but I'm at a level where I use insert mode most of the time, and I still use the arrow keys to move around(!). I feel like I'm not getting the best out of my lovely editor, particularly regarding navigating (especially code), copying and pasting, and other manipulations of existing code (though I am quite comfortable with complicated search/replace patterns). How should I go about learning more? What resources would people recommend?

    Read the article

  • AngularJS: Better way to display success messages

    - by Sup
        $('body').on('click', '#save-btn', function () {
            $('#greetingsModal').modal('show');
        });

        <div id="greetingsModal" class="modal hide fade" tabindex="-1" role="dialog"
             aria-labelledby="myModalLabel" aria-hidden="true">
            <div class="alert alert-success">
                <a href="../admin/Supplier" class="close" data-dismiss="alert">x</a>
                <strong>Well done!</strong>
            </div>
        </div>

    I want to display a popup message using the above styles whenever 'save-btn' is clicked. The above code works fine, but doing it this way introduces a noticeable delay. Is there a way to display such an alert message using Angular?
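
    A minimal sketch of the Angular approach (controller and flag names are my own): keep a boolean on the scope and let ng-show drive the alert, so no jQuery lookup or modal plugin is involved and the message appears as soon as the digest runs.

        // hedged sketch; SupplierCtrl and 'saved' are placeholder names
        function SupplierCtrl($scope) {
            $scope.saved = false;
            $scope.save = function () {
                // ... persist the record, then flip the flag:
                $scope.saved = true;
            };
        }

        <!-- template: the alert appears the moment 'saved' becomes true -->
        <button id="save-btn" ng-click="save()">Save</button>
        <div class="alert alert-success" ng-show="saved">
            <strong>Well done!</strong>
        </div>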

    Read the article

  • A better and faster alternative to eval?

    - by user1707250
    I want to build my queries dynamically, using the following snippet:

        --snip--
        module.exports = {
            get: function (req, res, next) {
                var queryStr = "req.database.table('locations').get(parseInt(req.params.id))";
                if (req.params.id) {
                    if (req.fields) {
                        queryStr += '.pick(' + req.fieldsStr + ')';
                    }
                    console.log(queryStr);
                    eval(queryStr).run(function (result) {
                        console.log(result);
                        res.send(result);
                    });
                } else if (!req.params.id) {
        --snip--

    However, introducing eval opens my code up to injection (req.fields is filled with URL parameters), and I see the response time of my app increase from 7 to 11 ms. Is there a smarter way to accomplish what I did here? Please advise.
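
    A sketch of the eval-free version (pick's exact signature is an assumption based on the snippet): build the query by chaining on the object itself, so nothing user-supplied is ever executed as code and req.fields can be validated as plain data.

        // hedged sketch; mirrors the snippet's req.database API
        module.exports = {
            get: function (req, res, next) {
                if (!req.params.id) {
                    return next();  // fall through to the original else-branch logic
                }
                var query = req.database.table('locations').get(parseInt(req.params.id, 10));
                if (req.fields) {
                    // apply() spreads the field-name array into .pick(f1, f2, ...)
                    query = query.pick.apply(query, req.fields);
                }
                query.run(function (result) {
                    res.send(result);
                });
            }
        };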

    Read the article

  • Lucene boost: I need to make it work better

    - by zvikico
    I'm using Lucene to index components with names and types. Some components are more important and thus get a bigger boost. However, I cannot get my boost to work properly: I still get some components appearing later (with a worse score) even though they have a higher boost. Note that the indexing is done on one field only, and I've set the boost on that field alone. I'm using Lucene in Java. I don't think it has anything to do with the field length; I've seen components with the same name (but a different type) get the wrong score.
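
    One classic gotcha worth ruling out (a possibility, not a diagnosis): Lucene folds the field boost into the field norm, which is stored as a single byte, so nearby boost values can quantize to the same norm and stop differentiating documents. A sketch using the pre-4.x API that the question's vintage suggests, plus explain() to see exactly where a suspect score comes from:

        // hedged sketch; field and variable names are placeholders
        Document doc = new Document();
        Field nameField = new Field("name", componentName,
                                    Field.Store.YES, Field.Index.ANALYZED);
        nameField.setBoost(isImportant ? 2.0f : 1.0f);  // use well-separated boost values
        doc.add(nameField);

        // at query time: dump the full scoring breakdown for document docId
        Explanation exp = searcher.explain(query, docId);
        System.out.println(exp.toString());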

    Read the article
