Search Results

Search found 39473 results on 1579 pages for 'johny why'.

  • CarrierWave and nested forms saving empty image object if photo :title is included in form

    - by Wasabi Developer
    I'm after some advice on handling nested form data, and I would be grateful for any insights. The trouble is I'm not 100% sure why I require the following code in my model:

        accepts_nested_attributes_for :holiday_image, allow_destroy: true,
          :reject_if => lambda { |a| a[:title].blank? }

    In other words, I don't understand why I need to tack the :reject_if lambda onto my accepts_nested_attributes_for association. If I remove it, a blank holiday photo object is saved to the database; I presume this is because the :title field comes through from the form as an empty string? I guess my question is: am I doing this right, or is there a better way to handle this within nested forms if I want to extend my HolidayImage model to include more strings such as description and notes? Sorry if I can't be more succinct.

    My simple holiday app:

        # holiday.rb
        class Holiday < ActiveRecord::Base
          has_many :holiday_image
          accepts_nested_attributes_for :holiday_image, allow_destroy: true,
            :reject_if => lambda { |a| a[:title].blank? }
          attr_accessible :name, :content, :holiday_image_attributes
        end

    I'm using CarrierWave for image uploads.

        # holiday_image.rb
        class HolidayImage < ActiveRecord::Base
          belongs_to :holiday
          attr_accessible :holiday_id, :image, :title
          mount_uploader :image, ImageUploader
        end

    Inside my _form partial there is a fields_for block:

        <h3>Photo gallery</h3>
        <%= f.fields_for :holiday_image do |holiday_image| %>
          <% if holiday_image.object.new_record? %>
            <%= holiday_image.label :title, "Image Title" %>
            <%= holiday_image.text_field :title %>
            <%= holiday_image.file_field :image %>
          <% else %>
            Title: <%= holiday_image.object.title %>
            <%= image_tag(holiday_image.object.image.url(:thumb)) %>
            Tick to delete: <%= holiday_image.check_box :_destroy %>
          <% end %>
        <% end %>

    Thanks again for your patience.
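
    For this exact pattern Rails provides a built-in shortcut: passing :reject_if => :all_blank skips any nested record whose attributes are all blank, so adding description or notes later requires no new lambda. A minimal sketch reusing the model above (note the behaviour differs slightly from the title-only check: a record with an image attached but a blank title would be kept):

        # holiday.rb (sketch: built-in :all_blank instead of a custom lambda)
        class Holiday < ActiveRecord::Base
          has_many :holiday_image
          # :all_blank rejects a nested record only when every attribute
          # (ignoring _destroy) is blank.
          accepts_nested_attributes_for :holiday_image, allow_destroy: true,
            :reject_if => :all_blank
          attr_accessible :name, :content, :holiday_image_attributes
        end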

  • ObjectDisposedException when .Show()'ing a form that shouldn't be disposed.

    - by user320781
    I've checked out some of the other questions, and obviously the best solution is to prevent the behavior that causes this issue in the first place, but the problem is very intermittent and very hard to reproduce. I basically have a main form with sub-forms. The sub-forms are shown from menus and/or buttons on the main form like so:

        private void myToolStripMenuItem_Click(object sender, EventArgs e)
        {
            try
            {
                xDataForm.Show();
                xDataForm.Activate();
            }
            catch (ObjectDisposedException)
            {
                MessageBox.Show("ERROR 10103");
                ErrorLogging newLogger = new ErrorLogging("10103");
                Thread errorThread = new Thread(ErrorLogging.writeErrorToLog);
                errorThread.Start();
            }
        }

    The sub-forms are actually fields of the main form (for better or worse; I would like to change this, but it would take a considerable amount of time):

        public partial class FormMainScreen : Form
        {
            Form xDataForm = new xData();
            // ... (lots more here)

            public FormMainScreen(int pCount, string pName)
            {
                InitializeComponent();
                // ...
            }
            // ...
        }

    The Dispose method of the sub-form is modified so that the 'close' and 'X' buttons actually hide the form, so we don't have to re-create it every time. When the main screen closes, it sets a flag to 2 so the other forms know that it is actually OK to close:

        protected override void Dispose(bool disposing)
        {
            if (FormMainScreen.isExiting == 2)
            {
                if (disposing && (components != null))
                {
                    components.Dispose();
                }
                base.Dispose(disposing);
            }
            else
            {
                if (xData.ActiveForm != null)
                {
                    xData.ActiveForm.Hide();
                }
            }
        }

    So, the question is: why would this work over and over again flawlessly, but literally about once in a thousand times throw this exception? Or rather, why is my form being disposed at all? I had a suspicion that the garbage collector was getting confused, because the error occurs slightly more frequently after the application has been running for many hours.
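
    For comparison, the conventional WinForms pattern for a hide-on-close form intercepts FormClosing instead of overriding Dispose, which leaves the framework's disposal path intact. A minimal sketch (form name hypothetical):

        // Sketch: hide on user close; let real application shutdown dispose normally.
        public partial class XDataForm : Form
        {
            public XDataForm()
            {
                InitializeComponent();
                FormClosing += (sender, e) =>
                {
                    // Only intercept closes triggered by the user or the X button.
                    if (e.CloseReason == CloseReason.UserClosing)
                    {
                        e.Cancel = true;  // keep the instance alive...
                        Hide();           // ...but take it off the screen
                    }
                };
            }
        }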

  • what is a RoR best practice? match by id or different column?

    - by Omnipresent
    I had a terrible morning, with lots of emails floating around about why things don't work. Upon investigating I found a data mismatch which is causing errors.

    Scenario: Customer and Address are two tables.

        class Customer < ActiveRecord::Base
          has_one :address, :foreign_key => "id"
        end

        class Address < ActiveRecord::Base
          belongs_to :customer, :foreign_key => "cid"
        end

    So the two tables match on id, which is the default, and that column is auto-incremented.

    Problem: on the edit page we have some code like this:

        params[:line1] = @customer.first.address.line1

    It fails because no matching record is found for a customer in the address table. I don't know why this is happening. It seems that over time a lot of records did not get added to the Address table. Now the problem is that when a new Customer is added (say with id 500), the Address will be added with some other id (say 425), and then you don't know which address belongs to which customer.

    Question: being new to Rails, I am asking whether it is always considered good to create an extra column for joining the records, rather than depending on the column that is automatically incremented? If I had a separate column in the Address table where I would manually insert the recently added customer's id, then this issue would not have come up.
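
    For what it's worth, that "extra column" is exactly the Rails convention: the child table carries a customer_id foreign key, and both associations then agree on it automatically. A minimal sketch:

        # Conventional setup: the addresses table has a customer_id column.
        class Customer < ActiveRecord::Base
          has_one :address          # Rails infers :foreign_key => "customer_id"
        end

        class Address < ActiveRecord::Base
          belongs_to :customer      # reads addresses.customer_id
        end

        # customer.address now joins on addresses.customer_id = customers.id,
        # so address rows cannot drift out of step with customer ids.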

  • The Classic jQuery Tabs with Bing Maps Issue

    - by Justin
    Hello, I know that there are multiple issues with jQuery Tabs and maps, and I have seen the multiple fixes; I am half-way there. But I have the most obscure issue, and I am hoping someone might understand why. This is my code for the tabs:

        $("#contactTabs").tabs({
            spinner: 'Loading <img src="../images/icons/ajax-loader.gif" />'
        });

        $('#contactTabs').bind('tabsshow', function(event, ui) {
            if (ui.panel.id == "Map") {
                GetMap();
            }
        });

    This currently does not work. But I was doing some testing and added an alert() to see if GetMap() was even attempting to be loaded, so this was the code I tested with, and it works just fine:

        $("#contactTabs").tabs({
            spinner: 'Loading <img src="../images/icons/ajax-loader.gif" />'
        });

        $('#contactTabs').bind('tabsshow', function(event, ui) {
            if (ui.panel.id == "Map") {
                alert("load map");
                GetMap();
            }
        });

    So I haven't a clue why adding the alert() causes the map to load, while removing it means the map doesn't load at all. Is there any clarification that someone can give me on this issue? Thank you in advance!
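
    A plausible explanation: the alert() blocks just long enough for the freshly shown tab panel to be laid out, so the map initializes against an element that already has real dimensions. The common workaround is to defer the call instead of blocking; a sketch:

        $('#contactTabs').bind('tabsshow', function(event, ui) {
            if (ui.panel.id == "Map") {
                // A 0 ms timeout queues GetMap() until after the browser
                // has finished showing and sizing the tab panel.
                setTimeout(GetMap, 0);
            }
        });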

  • Android, NetworkInfo.getTypeName(), NullPointerException

    - by moppel
    I have an activity which shows some list entries. When I click on a list item, my app checks which connection type is available ("WIFI" or "MOBILE") through NetworkInfo.getTypeName(). As soon as I call this method I get a NullPointerException. Why? I tested this on the emulator, because my phone is currently not available (it's broken...). I assume this is the problem? That is the only explanation I have; if that's not the case, I have no idea why this would be null. Here's a code snippet:

        public class VideoList extends ListActivity {
            // ...
            public void onCreate(Bundle bundle) {
                final ConnectivityManager cm =
                    (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
                // ...
                listview.setOnItemClickListener(new OnItemClickListener() {
                    public void onItemClick(AdapterView<?> parent, View view,
                                            int position, long id) {
                        // ...
                        NetworkInfo ni = cm.getActiveNetworkInfo();
                        String connex = ni.getTypeName(); // NullPointerException here
                        if (connex.equals("WIFI")) doSomething();
                    }
                });
            }
        }
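
    The emulator guess is consistent with the documentation: getActiveNetworkInfo() returns null when there is no connected network, which is easy to hit on an emulator. The call therefore needs a null check regardless of the device; a sketch of the click-handler body:

        NetworkInfo ni = cm.getActiveNetworkInfo();
        // null means there is no active network at all
        if (ni != null && "WIFI".equals(ni.getTypeName())) {
            doSomething();
        }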

  • Case class copy() method abstraction.

    - by Joa Ebert
    I would like to know if it is possible to abstract the copy method of case classes. Basically I have something like sealed trait Op, and then something like case class Push(value: Int) extends Op and case class Pop() extends Op.

    The first problem: a case class without arguments/members does not define a copy method. You can try this in the REPL:

        scala> case class Foo()
        defined class Foo

        scala> Foo().copy()
        <console>:8: error: value copy is not a member of Foo
               Foo().copy()
                     ^

        scala> case class Foo(x: Int)
        defined class Foo

        scala> Foo(0).copy()
        res1: Foo = Foo(0)

    Is there a reason why the compiler makes this exception? I think it is rather unintuitive, and I would expect every case class to define a copy method.

    The second problem: I have a method def ops: List[Op], and I would like to copy all ops like ops map { _.copy() }. How would I define the copy method in the Op trait? I get a "too many arguments" error if I say def copy(): this.type. However, since all copy() methods have only optional arguments, why is this incorrect? And how do I do it correctly? By adding another method named def clone(): this.type and writing def clone() = copy() in all the case classes? I hope not.
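
    One workable pattern (a sketch, not from the original post) declares the duplication operation on the trait and lets each case class implement it, using the generated copy where one exists:

        sealed trait Op {
          def duplicate: Op          // every Op can produce a fresh copy of itself
        }

        case class Push(value: Int) extends Op {
          def duplicate: Op = copy()
        }

        case class Pop() extends Op {
          def duplicate: Op = Pop()  // no-arg case classes get no generated copy
        }

        // usage: ops map { _.duplicate }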

  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that differ from their default-constructed value. So sparse_vector<T> would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries, though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and Eigen's SparseVector. Both store:

        size_t* indices_;  // a dynamic array
        T* values_;        // a dynamic array
        int size_;
        int capacity_;

    Why don't they simply use the following?

        vector<pair<size_t, T>> data_;

    My main question is: what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you and simplifies the accompanying iterator classes; it also uses one memory block instead of two, so it incurs half the reallocations and might have better locality of reference. The other solution might search more quickly, since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type. It seems to me that the vector of pairs is the better solution, yet both containers chose the other design. Why?
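
    For concreteness, here is a minimal sketch of the vector-of-pairs design under discussion (illustrative only; it uses a linear scan where a real implementation would keep the pairs sorted and binary-search):

        #include <cstddef>
        #include <utility>
        #include <vector>

        template <typename T>
        class sparse_vector {
        public:
            // Value at index i, or T() if the index is not stored.
            T get(std::size_t i) const {
                for (const auto& e : data_)
                    if (e.first == i) return e.second;
                return T();
            }

            void set(std::size_t i, const T& v) {
                for (auto& e : data_)
                    if (e.first == i) { e.second = v; return; }
                data_.emplace_back(i, v);  // size/capacity handled by std::vector
            }

        private:
            // One allocation holds both indices and values.
            std::vector<std::pair<std::size_t, T>> data_;
        };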

  • Threads in Java

    - by owca
    I've created a simple program to test threads in Java. I'd like it to print numbers infinitely, like 123123123123123. I don't know why, but it currently stops after one cycle, printing 213 only. Does anyone know why?

        public class Main {
            int number;

            public Main(int number) {
            }

            public static void main(String[] args) {
                new Infinite(2).start();
                new Infinite(1).start();
                new Infinite(3).start();
            }
        }

        class Infinite extends Thread {
            static int which = 1;
            static int order = 1;
            int id;
            int number;
            Object console = new Object();

            public Infinite(int number) {
                id = which;
                which++;
                this.number = number;
            }

            @Override
            public void run() {
                while (1 == 1) {
                    synchronized (console) {
                        if (order == id) {
                            System.out.print(number);
                            order++;
                            if (order >= which) {
                                order = 1;
                            }
                            try {
                                console.notifyAll();
                                console.wait();
                            } catch (Exception e) {}
                        } else {
                            try {
                                console.notifyAll();
                                console.wait();
                            } catch (Exception e) {}
                        }
                    }
                    try { Thread.sleep(0); } catch (Exception e) {}
                }
            }
        }
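
    The likely culprit: console is an instance field, so each thread synchronizes and waits on its own private monitor. Every notifyAll() is addressed to an empty wait set, and once each thread has taken its one turn, all three park in wait() forever; hence exactly one cycle. Sharing a single monitor fixes it. A sketch of the corrected class (with the threads constructed as in main above, this prints 213213213... indefinitely):

        class Infinite extends Thread {
            // One monitor shared by ALL instances; with a per-instance lock,
            // notifyAll() wakes nobody and every thread blocks after one turn.
            private static final Object console = new Object();
            private static int which = 1;
            private static int order = 1;
            private final int id;
            private final int number;

            Infinite(int number) {
                this.id = which++;
                this.number = number;
            }

            @Override
            public void run() {
                while (true) {
                    synchronized (console) {
                        if (order == id) {
                            System.out.print(number);
                            if (++order >= which) order = 1;
                        }
                        console.notifyAll();   // wakes the other threads
                        try {
                            console.wait();    // releases the shared lock
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            }
        }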

  • Ruby - calling constructor without arguments & removal of new line characters

    - by Raj
    I am a newbie at Ruby, and I have written a sample program. I don't understand the following:

    1. Why is my constructor without arguments not called in Ruby?
    2. How do we access the class variable outside the class definition?
    3. Why does input always have a newline character appended at the end of the string, and how do we strip it?

    Code:

        class Employee
          attr_reader :empid
          attr_writer :empid
          attr_writer :name

          def name
            return @name.upcase
          end

          attr_accessor :salary

          @@employeeCount = 0

          def initiaze()   # (sic)
            @@employeeCount += 1
            puts ("Initialize called!")
          end

          def getCount
            return @@employeeCount
          end
        end

        anEmp = Employee.new
        print ("Enter new employee name: ")
        anEmp.name = gets()
        print ("Enter #{anEmp.name}'s employee ID: ")
        anEmp.empid = gets()
        print ("Enter salary for #{anEmp.name}: ")
        anEmp.salary = gets()

        theEmpName = anEmp.name.split.join("\n")
        theEmpID = anEmp.empid.split.join("\n")
        theEmpSalary = anEmp.salary.split.join("\n")

        anEmp = Employee.new()
        anEmp = Employee.new()
        theCount = anEmp.getCount

        puts ("New employee #{theEmpName} with employee ID #{theEmpID} has been enrolled, welcome to hell! You have been paid as low as $ #{theEmpSalary}")
        puts ("Total number of employees created = #{theCount}")

    Output:

        Enter new employee name: Lionel Messi
        Enter LIONEL MESSI 's employee ID: 10
        Enter salary for LIONEL MESSI : 10000000
        New employee LIONEL MESSI with employee ID 10 has been enrolled, welcome to hell! You have been paid as low as $ 10000000
        Total number of employees created = 0

    Thanks
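
    Two of these have short answers worth recording: Ruby's constructor hook must be spelled initialize (the method above is misspelled, so Employee.new never calls it, which is also why the count stays 0), and gets returns the input line including its trailing newline, which String#chomp removes. A sketch:

        class Employee
          @@employee_count = 0

          def initialize             # correct spelling: invoked by Employee.new
            @@employee_count += 1
          end

          def self.count             # class-level reader for the class variable
            @@employee_count
          end
        end

        anEmp = Employee.new
        name = gets.chomp            # strips the trailing "\n"
        puts Employee.count          # => 1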

  • Run JavaScript code at ASP.NET page load

    - by vaibhav
    I have a radio button list:

        <asp:RadioButtonList CssClass="list" Style="width: 150px" ID="rdo_RSD_ExcerciseRoT"
            runat="server" Font-Bold="false" RepeatDirection="Horizontal"
            RepeatLayout="Table" TextAlign="Left">
            <asp:ListItem Text="Yes" onclick="en();" Value="Y"></asp:ListItem>
            <asp:ListItem Text="No" onclick="dis();" Value="N" Selected="True"></asp:ListItem>
        </asp:RadioButtonList>

    As you can see, the second list item is selected by default. The issue is that when my page loads, dis() is not called, and I want to run dis() on page load too. I tried Google, and some blogs suggest using the Page.RegisterStartupScript method, but I don't exactly know what the problem is or why we should use that method. I would appreciate it if someone could tell me why this function is not being called and how to call it.

    Edit: I am including the JavaScript code as well, in case it helps.

        <script type="text/javascript">
            function dis() {
                ValidatorEnable(document.getElementById('<%=RequiredFieldValidator32.ClientID%>'), false);
            }
            function en() {
                ValidatorEnable(document.getElementById('<%=RequiredFieldValidator32.ClientID%>'), true);
            }
        </script>
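
    The short reason: onclick is a client-side event, so it fires only when the user actually clicks a radio button; marking an item Selected on the server raises no click at render time. The call has to be made explicitly once the page is on screen, either from script or by having the server emit it (which is what RegisterStartupScript does). A sketch of the plain client-side version:

        <script type="text/javascript">
            // Run once the page (and the validators) have loaded,
            // mirroring the server-side default selection of "No".
            window.onload = function () {
                dis();
            };
        </script>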

  • Very slow guards in my monadic random implementation (haskell)

    - by danpriduha
    Hi! I tried to write a random number generator implementation based on a number class, and I also added Monad and MonadPlus instances for it. Why the MonadPlus instance? Because I want to use guards, like here:

        -- test.hs --
        import RandomMonad
        import Control.Monad
        import System.Random

        x = Rand (randomR (1 :: Integer, 3)) :: Rand StdGen Integer

        y = do
          a <- x
          guard (a /= 2)
          guard (a /= 1)
          return a

    Here are the contents of the RandomMonad.hs file:

        -- RandomMonad.hs --
        module RandomMonad where

        import Control.Monad
        import System.Random
        import Data.List

        data RandomGen g => Rand g a = Rand (g -> (a, g)) | RandZero

        instance (Show g, RandomGen g) => Monad (Rand g) where
          return x = Rand (\g -> (x, g))
          (RandZero) >>= _ = RandZero
          (Rand argTransformer) >>= parametricRandom = Rand funTransformer
            where
              funTransformer g
                | isZero x  = funTransformer g1
                | otherwise = (getRandom x g1, getGen x g1)
                where
                  x         = parametricRandom val
                  (val, g1) = argTransformer g
              isZero RandZero = True
              isZero _        = False

        instance (Show g, RandomGen g) => MonadPlus (Rand g) where
          mzero = RandZero
          RandZero `mplus` x = x
          x `mplus` RandZero = x
          x `mplus` y = x

        getRandom :: RandomGen g => Rand g a -> g -> a
        getRandom (Rand f) g = fst (f g)

        getGen :: RandomGen g => Rand g a -> g -> g
        getGen (Rand f) g = snd (f g)

    When I run the ghci interpreter and give the following command:

        getRandom y (mkStdGen 2000000000)

    I see a memory overflow on my computer (1 GB). That's not expected, and if I delete one guard it works very fast. Why does it work so slowly in this case? What am I doing wrong?

  • CoInitialize fails in dll

    - by Quandary
    Question: I have the following program, which uses COM and the Microsoft Speech API (SAPI) to take a text and output it as sound. It works fine as long as I have it in an .exe. When I load it as a .dll, it fails. Why? I used Dependency Walker and saw that the exe doesn't have MSVCR100D and ole32, so I loaded them like this:

        LoadLibraryA("MSVCR100D.DLL");
        LoadLibraryA("ole32.dll");

    but it didn't help... Any idea why?

        #include <windows.h>
        #include <sapi.h>
        #include <cstdlib>

        int main(int argc, char* argv[])
        {
            ISpVoice * pVoice = NULL;

            if (FAILED(::CoInitialize(NULL)))
                return FALSE;

            HRESULT hr = CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL,
                                          IID_ISpVoice, (void **) &pVoice);
            if (SUCCEEDED(hr))
            {
                hr = pVoice->Speak(L"Noobie was fragged by GSG9 Googlebot", 0, NULL);
                hr = pVoice->Speak(L"Test Test", 0, NULL);
                hr = pVoice->Speak(L"This sounds normal <pitch middle = '-10'/> but the pitch drops half way through",
                                   SPF_IS_XML, NULL);
                pVoice->Release();
                pVoice = NULL;
            }

            ::CoUninitialize();
            return EXIT_SUCCESS;
        }
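
    One common reason this exact pattern fails once the code lives in a DLL (offered as a hypothesis, since the failing HRESULT isn't shown): the host application has already initialized COM on the calling thread, possibly with a different apartment model, in which case CoInitialize returns RPC_E_CHANGED_MODE even though COM is perfectly usable. Distinguishing that case from a real failure changes the behavior; a sketch:

        HRESULT hr = ::CoInitialize(NULL);     // asks for an STA
        if (hr == RPC_E_CHANGED_MODE)
        {
            // The thread is already in the MTA (initialized by the host).
            // COM works; just don't balance this with CoUninitialize.
        }
        else if (FAILED(hr))
        {
            return FALSE;                      // genuine initialization failure
        }
        // Note: S_FALSE means "already initialized as STA" and is also fine.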

  • Interpreting java.lang.NoSuchMethodError message

    - by Doog
    I get the following runtime error message (along with the first line of the stack trace, which points to line 94 of writeSummaryLink). I'm trying to figure out why it says no such method exists.

        java.lang.NoSuchMethodError: com.sun.tools.doclets.formats.html.SubWriterHolderWriter.printDocLinkForMenu(
            ILcom/sun/javadoc/ClassDoc;Lcom/sun/javadoc/MemberDoc;
            Ljava/lang/String;Z)Ljava/lang/String;
          at com.sun.tools.doclets.formats.html.AbstractExecutableMemberWriter.writeSummaryLink(
            AbstractExecutableMemberWriter.java:94)

    QUESTIONS

    1. What does "ILcom" or "Z" mean?
    2. Why are there four types in parentheses (ILcom/sun/javadoc/ClassDoc;Lcom/sun/javadoc/MemberDoc;Ljava/lang/String;Z) and one after the parentheses, Ljava/lang/String;, when the method printDocLinkForMenu clearly has five parameters?

    CODE DETAIL

    The writeSummaryLink method is:

        protected void writeSummaryLink(int context, ClassDoc cd, ProgramElementDoc member) {
            ExecutableMemberDoc emd = (ExecutableMemberDoc) member;
            String name = emd.name();
            writer.strong();
            writer.printDocLinkForMenu(context, cd, (MemberDoc) emd, name, false); // line 94
            writer.strongEnd();
            writer.displayLength = name.length();
            writeParameters(emd, false);
        }

    Here's the method line 94 is calling:

        public void printDocLinkForMenu(int context, ClassDoc classDoc, MemberDoc doc,
                                        String label, boolean strong) {
            String docLink = getDocLink(context, classDoc, doc, label, strong);
            print(deleteParameterAnchors(docLink));
        }
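
    The string in the error is a standard JVM method descriptor, which decodes as follows (this is the class-file format, not anything specific to this code):

        // (ILcom/sun/javadoc/ClassDoc;Lcom/sun/javadoc/MemberDoc;Ljava/lang/String;Z)Ljava/lang/String;
        //
        //   I                           int       (1st parameter)
        //   Lcom/sun/javadoc/ClassDoc;  ClassDoc  (2nd parameter)
        //   Lcom/sun/javadoc/MemberDoc; MemberDoc (3rd parameter)
        //   Ljava/lang/String;          String    (4th parameter)
        //   Z                           boolean   (5th parameter)
        //
        // The Ljava/lang/String; AFTER the ')' is the return type.

    So there are five parameter types after all: "ILcom..." is just "I" followed by "Lcom...;". Notably, the runtime is looking for a printDocLinkForMenu that returns String, while the version shown above returns void, which suggests the calling class was compiled against one version of the doclet library and is running against another.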

  • Correct Delphi compiler switches to stop in the user's code, not my component's

    - by Jeremy Mullin
    I'm modifying our VCL components so the end user's application links to our dcu files instead of building our source code each time. We have everything working, but I want the debugger to stop in the user's code when an exception is raised. At first it would stop in our dcu and open the CPU window. I was able to prevent that by removing debug info from the dcu files. But now it still doesn't stop in the user's code (like the DevExpress libraries and others do).

    The following screencast is a short example. The first time, I cause an exception in the DevExpress code, and the debugger correctly stops in my button event. The second time, I cause an exception in my components, but the debugger doesn't have my button event on the call stack and doesn't show me where the problem was. Any ideas why?

        http://screencast.com/t/NjhlOTRk

    Currently building the DCUs with these options:

        -$W+ -$D- -h -w -q

    Update: the TDataSet methods in between my component and the button event seem to cause this behavior. If I instead call a direct method of my table, I get the expected behavior. I'm guessing there isn't anything I can do about this, but I'm still curious why it happens.

  • Constructor and Destructors in C++ [Not a question] [closed]

    - by Jack
    I am using gcc. Please tell me if I am wrong. Let's say I have two classes, A and B:

        class A {
        public:
            A() { cout << "A constructor" << endl; }
            ~A() { cout << "A destructor" << endl; }
        };

        class B : public A {
        public:
            B() { cout << "B constructor" << endl; }
            ~B() { cout << "B destructor" << endl; }
        };

    1. The first line in B's constructor is a call to A's constructor (I assume the compiler automatically inserts it). Likewise, the last line in B's destructor is a call to A's destructor (the compiler inserts that too). Why was it built this way?

    2. When I say A * a = new B(); the compiler creates a new B object and checks whether A is a base class of B; if it is, it allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors. (With help from @Tyler McHenry and @Konrad Rudolph.)

    3. When I write delete a; the compiler sees that a is a pointer of type A*, so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name destroy() in memory, so we can implement virtual destructors; now the call is made to B's destructor and all is well in C++ land.

    Please comment.
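
    A self-contained illustration of point 3, runnable as-is, where the virtual destructor makes the difference:

        #include <iostream>
        using namespace std;

        class A {
        public:
            A() { cout << "A constructor" << endl; }
            virtual ~A() { cout << "A destructor" << endl; }  // virtual: safe delete-through-base
        };

        class B : public A {
        public:
            B() { cout << "B constructor" << endl; }   // A() runs first, inserted implicitly
            ~B() { cout << "B destructor" << endl; }   // ~A() runs afterwards, implicitly
        };

        int main() {
            A* a = new B();  // prints: A constructor, B constructor
            delete a;        // prints: B destructor, A destructor
        }                    // without virtual ~A, `delete a` would be undefined behavior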

  • Perl's use encoding pragma breaking UTF strings

    - by Karel Bílek
    I have a problem with Perl and the encoding pragma. (I use UTF-8 everywhere: in input, output, and the Perl scripts themselves. I never want to use any other encoding.) However, when I write:

        binmode(STDOUT, ':utf8');
        use utf8;

        $r = "\x{ed}";
        print $r;

    I see the string "í" (which is what I want; it is the Unicode character U+00ED). But when I add the encoding pragma, like this:

        binmode(STDOUT, ':utf8');
        use utf8;
        use encoding 'utf8';

        $r = "\x{ed}";
        print $r;

    all I see is a box character. Why? Moreover, when I add Data::Dumper and let it print the new string, like this:

        binmode(STDOUT, ':utf8');
        use utf8;
        use encoding 'utf8';

        $r = "\x{ed}";
        use Data::Dumper;
        print Dumper($r);

    I see that Perl changed the string to "\x{fffd}". Why?
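
    \x{fffd} is the Unicode replacement character, which points at a likely mechanism (offered as a reading of the pragma's documented caveats, not a verified trace): under use encoding, single-byte \x{..} escapes in literals are treated as raw bytes of the declared source encoding, and a lone 0xED is not valid UTF-8, so decoding replaces it with U+FFFD. Dropping the pragma, or constructing the character explicitly, sidesteps the double interpretation; a sketch:

        use strict;
        use warnings;
        use utf8;                   # source is UTF-8; no `use encoding`
        binmode(STDOUT, ':utf8');   # encode output as UTF-8

        my $r = chr(0xED);          # unambiguously the code point U+00ED
        print $r, "\n";             # prints í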

  • Socket.Receive Failing When Multithreaded

    - by Qua
    The following piece of code runs fine when parallelized to 4-5 threads, but starts to fail as the number of threads increases somewhere beyond 10 concurrent threads:

        int totalReceived = 0;
        int received;
        StringBuilder contentSB = new StringBuilder(4000);

        while ((received = socket.Receive(buffer, SocketFlags.None)) > 0)
        {
            contentSB.Append(Encoding.ASCII.GetString(buffer, 0, received));
            totalReceived += received;
        }

    The Receive method returns with zero bytes read, and if I continue calling it I eventually get an 'An established connection was aborted by the software in your host machine' exception. So I'm assuming that the host actually sent data and then closed the connection, but for some reason I never received it. I'm curious as to why this problem arises when there are a lot of threads. I'm thinking it must have something to do with the fact that each thread doesn't get as much execution time, and therefore there is some idle time for the threads which causes this error. I just can't figure out why idle time would cause the socket not to receive any data.

  • New projects not built when target platform is set explicitly

    - by stiank81
    I create a new solution with one project and then change the target platform from "Any CPU" to "x86". After this, new projects I add don't get built by default, and their target platform doesn't follow the global settings. Why? Looking at the Configuration Manager, new projects are not checked to "Build", and they get target platform "Any CPU" instead of the globally set x86. Why is this happening? I expect new projects to pick up the globally set and defined x86 target platform too.

    Some things I've tried:

    - Toggling the global platform back to Any CPU and then to x86 again. No change.
    - Choosing the platform explicitly for the new project. x86 is not available in the list, and when I say <New..> and try adding it, I'm not allowed to, as "a solution platform with the same name already exists".
    - In the build properties for the new project, I can't change the platform in the Configuration section, but I can set "Platform target" to x86 in the General section. It is, however, not clear whether this actually makes a difference, and it doesn't respond if I change the target platform globally later.

    Initially I thought this was a problem from converting my solution from VS2008 to VS2010, but the problem applies to both: when I create a solution in VS2008 and just stay in VS2008, I still get the problem.

  • Ruby on Rails bizarre behavior with ActiveRecord error handling

    - by randombits
    Can anyone explain why this happens?

        mybox:$ ruby script/console
        Loading development environment (Rails 2.3.5)
        >> foo = Foo.new
        => #<Foo id: nil, customer_id: nil, created_at: nil, updated_at: nil>
        >> bar = Bar.new
        => #<Bar id: nil, bundle_id: nil, alias: nil, real: nil, active: true, list_type: 0, body_record_active: false, created_at: nil, updated_at: nil>
        >> bar.save
        => false
        >> bar.errors.each_full { |msg| puts msg }
        Real can't be blank
        Real You must supply a valid email
        => ["Real can't be blank", "Real You must supply a valid email"]

    So far that is perfect; that is what I want the error message to read. Now for more:

        >> foo.bars << bar
        => [#<Bar id: nil, bundle_id: nil, alias: nil, real: nil, active: true, list_type: 0, body_record_active: false, created_at: nil, updated_at: nil>]
        >> foo.save
        => false
        >> foo.errors.to_xml
        => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<errors>\n  <error>Bars is invalid</error>\n</errors>\n"

    That is what I can't figure out: why am I getting "Bars is invalid" instead of the error messages displayed above, ["Real can't be blank", "Real you must supply a valid email"], etc.? My controller simply has a respond_to block with the following in it:

        format.xml { render :xml => @foo.errors, :status => :unprocessable_entity }

    How do I have this output the real error messages so the user has some insight into what they did wrong?
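
    This is standard ActiveRecord behavior: when validation of an associated record fails, the parent only records the summary message "Bars is invalid"; the detailed messages stay on each child's own errors object. One way to surface them, sketched against the Rails 2.3-era API used above:

        # Gather the parent's messages plus each invalid child's messages.
        messages = @foo.errors.full_messages
        @foo.bars.each do |bar|
          messages += bar.errors.full_messages unless bar.valid?
        end

        format.xml { render :xml => messages.to_xml(:root => "errors"),
                     :status => :unprocessable_entity }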

  • Can't draw UIImage in UIView::drawRect

    - by Joel
    I know this seems like a simple task, which is why I don't understand why I can't get the image to render. When I set up my UIView, I do the following:

        myUiView.backgroundColor = [UIColor clearColor];
        myUiView.opaque = NO;

    I create and retain the UIImage in the init function of my UIView:

        image = [[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"test" ofType:@"png"]] retain];

    Then my drawRect looks like this:

        - (void) drawRect:(CGRect) rect
        {
            [image drawInRect:self.bounds];
        }

    Ultimately I'll be manipulating that UIImage via a bitmap context, and then in drawRect create a CGImage out of the context and render that, but for now I'm just trying to get it to render a known image. I've been digging through this site as well as the documentation. I've gone down the Core Graphics path and tried drawing it with CGContextDrawImage by following the numerous examples other people have posted, but that didn't work either. So I've come back to what seems to be the most straightforward way to draw an image, but it isn't working. Any help would be greatly appreciated. Thanks in advance.

  • Conversion failed when converting the varchar value to int

    - by onedaywhen
    Microsoft SQL Server 2008 (SP1): I'm getting an unexpected 'Conversion failed' error. I'm not quite sure how to describe this problem, so below is a simple example. The CTE extracts the numeric portion of certain IDs, using a search condition to ensure a numeric portion actually exists. The CTE is then used to find the lowest unused sequence number (kind of):

        CREATE TABLE IDs (ID CHAR(3) NOT NULL UNIQUE);
        INSERT INTO IDs (ID) VALUES ('A01'), ('A02'), ('A04'), ('ERR');

        WITH ValidIDs (ID, seq) AS (
            SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER)
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (
            SELECT *
            FROM ValidIDs AS V2
            WHERE V2.seq = V1.seq + 1
        );

    The error is: 'Conversion failed when converting the varchar value 'RR' to data type int.' I can't understand why the value ID = 'ERR' is even being considered for conversion, because the predicate ID LIKE 'A[0-9][0-9]' should have removed the invalid row from the resultset. When the base table is substituted with an equivalent CTE, the problem goes away:

        WITH IDs (ID) AS (
            SELECT 'A01' UNION ALL
            SELECT 'A02' UNION ALL
            SELECT 'A04' UNION ALL
            SELECT 'ERR'
        ),
        ValidIDs (ID, seq) AS (
            SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER)
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (
            SELECT *
            FROM ValidIDs AS V2
            WHERE V2.seq = V1.seq + 1
        );

    Why would a base table cause this error? Is this a known issue?

    UPDATE for @sgmoore: no, doing the filtering in one CTE and the casting in another CTE still results in the same error, e.g.

        WITH FilteredIDs (ID) AS (
            SELECT ID
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        ),
        ValidIDs (ID, seq) AS (
            SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER)
            FROM FilteredIDs
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (
            SELECT *
            FROM ValidIDs AS V2
            WHERE V2.seq = V1.seq + 1
        );
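
    For context: SQL Server does not guarantee that a WHERE predicate is evaluated before expressions in the select list; the optimizer is free to reorder, so the CAST can reach 'ERR' even though the LIKE would reject it. The usual defence is to guard the cast with a CASE expression, whose branch evaluation order is guaranteed; a sketch of the first CTE rewritten that way:

        WITH ValidIDs (ID, seq) AS (
            SELECT ID,
                   CASE WHEN ID LIKE 'A[0-9][0-9]'
                        THEN CAST(RIGHT(ID, 2) AS INTEGER)
                   END              -- NULL for non-matching rows; 'RR' is never cast
            FROM IDs
            WHERE ID LIKE 'A[0-9][0-9]'
        )
        SELECT MIN(V1.seq) + 1 AS next_seq
        FROM ValidIDs AS V1
        WHERE NOT EXISTS (SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1);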

  • PHP's openssl_sign generates different signature than SSCrypto's sign

    - by pascalj
    I'm writing an OS X client for a piece of software that is written in PHP. This software uses a simple RPC interface to receive and execute commands. The RPC client has to sign the commands it sends, to ensure that no MITM can modify them. However, as the server was not accepting the signatures I sent from my OS X client, I started investigating and found out that PHP's openssl_sign function generates a different signature for a given private key/data combination than the Objective-C SSCrypto framework (which is only a wrapper for the OpenSSL library):

        SSCrypto *crypto = [[SSCrypto alloc] initWithPrivateKey:self.localPrivKey];
        NSData *shaed = [self sha1:@"hello"];
        [crypto setClearTextWithData:shaed];
        NSData *data = [crypto sign];

    generates a signature like CtbkSxvqNZ+mAN... The PHP code

        openssl_sign("hello", $signature, $privateKey);

    generates a signature like 6u0d2qjFiMbZ+... (for my particular key, of course; both base64 encoded). I'm not quite sure why this is happening, and I unsuccessfully experimented with different hash algorithms. As the PHP documentation states, SHA-1 is used by default. So why do these two functions generate different signatures, and how can I get the Objective-C part to generate a signature that PHP's openssl_verify will accept? Note: I double-checked that the keys and the data are correct!
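
    One detail that is certain on the PHP side and may explain the mismatch: openssl_sign() hashes the raw data itself before signing, so its input must be the original string, not a precomputed digest. The Objective-C code above SHA-1s "hello" first; if the signing call then hashes its input again (as OpenSSL's high-level signing does), the result is a signature over SHA1(SHA1(data)), which openssl_verify will reject. Spelled out as a sketch:

        <?php
        // What openssl_sign($data, $sig, $key) effectively does:
        //   1. digest = SHA1($data)                  -- hashing happens inside the call
        //   2. $sig   = RSA(PKCS#1 DigestInfo(digest), $key)
        // So the client should sign the raw command string and let its own
        // crypto layer hash exactly once.
        $ok = openssl_sign($rawCommand, $signature, $privateKey, OPENSSL_ALGO_SHA1);
        ?>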

  • C++0x rvalue references and temporaries

    - by Doug
    (I asked a variation of this question on comp.std.c++ but didn't get an answer.) Why does the call to f(arg) in this code call the const ref overload of f?

        void f(const std::string &);  // less efficient
        void f(std::string &&);       // more efficient

        void g(const char * arg)
        {
            f(arg);
        }

    My intuition says that the f(string &&) overload should be chosen, because arg needs to be converted to a temporary no matter what, and the temporary matches the rvalue reference better than the lvalue reference. This is not what happens in GCC and MSVC. In at least g++ and MSVC, any lvalue does not bind to an rvalue reference argument, even if there is an intermediate temporary created. Indeed, if the const ref overload isn't present, the compilers diagnose an error. However, writing f(arg + 0) or f(std::string(arg)) does choose the rvalue reference overload, as you would expect.

    From my reading of the C++0x standard, it seems like the implicit conversion of a const char * to a string should be considered when deciding whether f(string &&) is viable, just as when passing a const lvalue ref argument. Section 13.3 (overload resolution) doesn't differentiate between rvalue refs and const references in too many places. Also, it seems that the rule that prevents lvalues from binding to rvalue references (13.3.3.1.4/3) shouldn't apply if there's an intermediate temporary; after all, it's perfectly safe to move from the temporary.

    Is this:

    1. Me misreading/misunderstanding the standard, where the implemented behavior is the intended behavior, and there's some good reason why my example should behave the way it does?
    2. A mistake that the compiler vendors have somehow all made? Or a mistake based on common implementation strategies? Or a mistake in e.g. GCC (where this lvalue/rvalue reference binding rule was first implemented) that was copied by other vendors?
    3. A defect in the standard, or an unintended consequence, or something that should be clarified?
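
    For the record, the questioner's intuition matches how the rules were eventually settled: in C++11 as published, both overloads require the same user-defined conversion, and the tie-break prefers binding the resulting temporary (an rvalue) to the rvalue reference. A self-contained check, which prints "rvalue ref" under a conforming C++11 compiler:

        #include <iostream>
        #include <string>

        void f(const std::string&) { std::cout << "const lvalue ref\n"; }
        void f(std::string&&)      { std::cout << "rvalue ref\n"; }

        void g(const char* arg)
        {
            f(arg);  // the conversion materializes a temporary std::string, an rvalue
        }

        int main()
        {
            g("hello");
        }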

  • IE8 claims my page has an error, Firefox doesn't, and I can't find any error. Help!

    - by Bears will eat you
    This is something of a follow-up question to my question here. You can find the HTML source in a text file here. When I load that page in IE8, I get the "Done, but with errors on page." message in the status bar. The detail view shows:

        Expected identifier
        sms
        Line: 147
        Code: 0
        Char: 67

    and I see absolutely no problems anywhere near there. In IE8 the page still behaves erratically with respect to randomly losing focus, as mentioned in my other question. When I load the same exact page in Firefox (using Firebug), the console shows no errors and the page works perfectly. Any thoughts on what's going on here? This is driving me nuts and making me want to give up on even trying to write an IE-friendly page.

    Edit: Thanks for all the comments! This page is written as a JSP, so I edit in Eclipse. I found an Eclipse warning about the onblur event for the username field. I switched it from

        onblur="alert(document.activeElement + ' class:' + document.activeElement.class)"

    to

        onblur="alert(document.activeElement)"

    and that made the bizarre IE page error vanish. I had been trying to give more info (namely, its CSS class) about which element is stealing focus, to my own detriment, apparently, since JavaScript was interpreting the '.class' part in the Java(script) sense. And, no, the page doesn't validate. But the errors were mostly/all ones that just didn't make sense, such as:

        Line 14, Column 41: Attribute "LANGUAGE" is not a valid attribute. Did you mean "language"?

    to which I say, WTF?!

    But I'm still stuck trying to figure out why, as I enter text in the username and password fields, focus randomly switches to a div (working on figuring out which div currently).

    Edit 2: It's the div between the two "global nav" comments, at the very top of the body. Still no idea why it's happening, though.

  • SQLite dataypes lengths?

    - by XF
    I'm completely new to SQLite (as of 5 minutes ago, actually), but I do know Oracle and MySQL somewhat.

    The question: I'm trying to find out the lengths of each of the datatypes supported by SQLite, such as the differences between a BIGINT and a SMALLINT. I've searched the SQLite documentation (it only talks about affinity; is that all that matters?), SO threads, Google... and found nothing.

    My guess: I've just briefly revisited the SQL-92 specification, which talks about datatypes and their relations but not about their lengths, which is quite obvious, I assume. Yet I've come across the Oracle and MySQL datatype specs, and the specified lengths are mostly identical, for integers at least. Should I assume SQLite is using the same lengths?

    Aside question: have I missed something in the SQLite docs? Or have I missed something about SQL in general? I'm asking because I can't really understand why the SQLite docs don't specify something as basic as the datatype lengths. It just doesn't make sense to me! Although I'm sure there is a simple command to discover the lengths... but why not write them in the docs? Thank you!
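
    The docs are silent because SQLite has no fixed-width column types to document: a declared type only assigns one of five affinities (INTEGER, TEXT, REAL, NUMERIC, BLOB), and each value is stored in whatever storage class fits it, with integers taking 1, 2, 3, 4, 6, or 8 bytes depending on magnitude. BIGINT and SMALLINT therefore behave identically. A quick demonstration:

        CREATE TABLE t (a SMALLINT, b BIGINT, c FOO_TYPE);  -- any type name is accepted

        INSERT INTO t VALUES (300000, 1, 'hello');          -- 300000 overflows a "SMALLINT"

        -- typeof() reports the storage class actually used for each value,
        -- independent of the declared column type:
        SELECT typeof(a), typeof(b), typeof(c) FROM t;
        -- integer | integer | text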
