Search Results

Search found 39456 results on 1579 pages for 'why do you'.


  • INSERT INTO SQL Server error: invalid object name

    - by thormayer
    I have a problem with a statement on SQL Server. The error I get is that I have an invalid object name 'TBL_VIDEOS':
        INSERT INTO TBL_VIDEOS ( TBL_VIDEOS.ID, TBL_VIDEOS.TITLE, TBL_VIDEOS.V_DESCRIPTION, TBL_VIDEOS.UPLOAD_DATE, TBL_VIDEOS.V_VIEWS, TBL_VIDEOS.USERNAME, TBL_VIDEOS.RATING, TBL_VIDEOS.V_SOURCE, TBL_VIDEOS.FLAG )
        VALUES ('Z8MTRH3LmTVm', 'Why Creativity is the New Economy', 'Dr Richard Florida, one of the world's leading experts on economic competitiveness, demographic trends and cultural and technological innovation shows how developing the full human and creative capabilities of each individual, combined with institutional supports such as commercial innovation and new industry, will put us back on the path to economic and social prosperity. Listen to the podcast of the full event including audience Q&A: http://www.thersa.org/events/audio-and-past-events/2012/why-creativity-is-the-new-economy Our events are made possible with the support of our Fellowship. Support us by donating or applying to become a Fellow. Donate: http://www.thersa.org/support-the-rsa Become a Fellow: http://www.thersa.org/fellowship/apply', CURRENT_TIMESTAMP, 0, 1, 0, 'http://www.youtube.com/watch?v=VPX7gowr2vE&feature=g-all-u', 0)
    I wonder what I've done wrong? (BTW, the error refers to line 1... I guess it's the table name, but it is correct!)

    Read the article

  • Passing Object to Service in WCF

    - by hgulyan
    Hi, I have my custom class Customer with its properties. I added the DataContract attribute above the class and DataMember to the properties, and it was working fine, but I'm calling a service class's function, passing a customer instance as a parameter, and some of my properties get 0 values. While debugging I can see my properties' values, and after it gets to the function, some properties' values are 0. Why could that be? There's no code between these two actions. The DataContract attribute works fine, everything's OK. Any suggestions on this issue? I tried to change ByRef to ByVal, but it doesn't change anything. Why would it pass other values correctly and some of the integer types as just 0? Maybe the answer is simple, but I can't figure it out. Thank you. <DataContract()> Public Class Customer Private Type_of_clientField As Integer = -1 <DataMember(Order:=1)> Public Property type_of_client() As Integer Get Return Type_of_clientField End Get Set(ByVal value As Integer) Type_of_clientField = value End Set End Property End Class <ServiceContract(SessionMode:=SessionMode.Allowed)> <DataContractFormat()> Public Interface CustomerService <OperationContract()> Function addCustomer(ByRef customer As Customer) As Long End Interface The type_of_client property's value is 6 before I call the addCustomer function. After it enters that function the value is 0. UPDATE: The issue is in instance creation. When I create an instance of a class on the client side that is stored on the service side, some of my properties pass 0 or nothing, but when I call a function of a service class that returns a new instance of that class, it works fine. What is the difference? Could that be a serialization issue?

    Read the article

  • SSH traffic over openvpn freezes under weird circumstances

    - by user289581
    I have an openvpn (version 2.1_rc15 at both ends) connection set up between two gentoo boxes using shared keys. It works fine for the most part. I use mysql, http, ftp, scp over the vpn with no problems. But when I ssh from the client to the server over the vpn, weird things happen. I can log in, I can execute some commands. But if I try to run an ncurses application like top, or I try to cat a file, the connection will stall and I'll have to sever the ssh session. I can, for example, execute "echo blah; echo .; echo blah" and it will output the three lines of text over the ssh session fine. But if I execute "cat /etc/motd" the session will freeze the moment I press enter. While it seems like a terminal emulation problem, it makes no sense why using the vpn would affect the ability of ssh to render things correctly. I am at a loss to explain why everything else works, including scp, but ssh just breaks over the vpn. Any thoughts?

    Read the article

  • A question on delegates and method parameters

    - by Srinivas Reddy Thatiparthy
    public class Program
    {
        delegate void Srini(string param);
        static void Main(string[] args)
        {
            Srini sr = new Srini(PrintHello1);
            sr += new Srini(PrintHello2); //case 2:
            sr += new Srini(delegate(string o) { Console.WriteLine(o); });
            sr += new Srini(delegate(object o) { Console.WriteLine(o.ToString()); }); //case 4:
            sr += new Srini(delegate { Console.WriteLine("This line is accepted, though the method signature is not Comp"); }); //case 5
            sr("Hello World");
            Console.Read();
        }
        static void PrintHello1(string param) { Console.WriteLine(param); }
        static void PrintHello2(object param) { Console.WriteLine(param); }
    }
    The compiler doesn't complain about case 2 (see the comment); the reason is straightforward, since string inherits from object. Along the same lines, why is it complaining for the anonymous method (see the comment //case 4:) that "Cannot convert anonymous method to delegate type 'DelegateTest.Program.Srini' because the parameter types do not match the delegate parameter types", whereas in the case of a normal method it doesn't? Or am I comparing apples with oranges? Another case is: why is it accepting an anonymous method without parameters?
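
    For reference, here is a minimal standalone C# sketch (not the poster's project) of the conversion rules in play: a method group conversion may use a method whose parameter type is more general than the delegate's, but an anonymous method with a parameter list must match the delegate's parameters exactly, unless it omits the parameter list altogether.

        using System;

        class DelegateConversionSketch
        {
            delegate void Srini(string param);

            static void PrintWithObject(object param) { Console.WriteLine(param); }

            static void Main()
            {
                // Method group conversion: allowed, parameter contravariance applies
                // (a method taking object can stand in for a delegate taking string).
                Srini a = new Srini(PrintWithObject);

                // Anonymous method with a parameter list: the types must match exactly,
                // so the following line would not compile.
                // Srini b = new Srini(delegate(object o) { Console.WriteLine(o); });

                // Anonymous method with no parameter list at all: compatible with any
                // delegate signature, so this compiles.
                Srini c = new Srini(delegate { Console.WriteLine("no parameters declared"); });

                a("hello");
                c("hello");
            }
        }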

    Read the article

  • ASMX Web Service only works when all of the code is in one file without code-behind

    - by Ben McCormack
    I have an ASMX Web Service that has its code entirely in a code-behind file, so that the entire contents of the .asmx file is: <%@ WebService Language="C#" CodeBehind="~/App_Code/AddressValidation.cs" Class="AddressValidation" %> On my test machine (Windows XP with IIS 5), I set up a virtual directory just for this ASP.NET 2.0 solution and everything works great. All my code is separated nicely and it just works. However, when we deployed this solution to our Windows Server 2003 development environment, we noticed that the code only compiled when all of the code was dropped directly into the .asmx file, meaning that the solution didn't work with code-behind. We can't figure out why this is happening. One thing that's different about our setup in our development environment is that instead of creating a separate virtual directory just for this solution, we dropped it into an existing directory that runs a classic ASP application. So here we have a folder with an ASP.NET 2.0 application within a directory that contains a classic ASP application. Granted, everything in the ASP.NET 2.0 application works if all of the code is within the .asmx file and not in code-behind, but we'd really like to know why it's not recognizing the code-behind files and compiling them correctly.

    Read the article

  • Cannot convert object, received from ajax call, into a long

    - by Matt
    I'm using ASP.NET MVC and I have this method in my controller: [AcceptVerbs(HttpVerbs.Post)] public ActionResult LinkAccount(string site, object id) { return this.Json(id); } Here's the ajax method that calls it: $.post("/Account/LinkAccount", { site: "Facebook", id: FB.Facebook.apiClient.get_session().uid }, function(result) { alert(result); }, "json" ); Returning this.Json(id); makes the alert work... it alerts 7128383 (something similar to that). But if I change this.Json(id) to this.Json(Convert.ToInt64(id)); the alert does not fire... Any idea why I can't convert an object received from an ajax call to a long? I already know that changing the LinkAccount method to accept a long instead works just fine. It's just that I need it as an object, because some other sites I'm linking up have strings for ids rather than longs. UPDATE: I tried running the code on localhost so I could set a breakpoint. First I changed the line return this.Json(Convert.ToInt64(id)); to long idAsLong = Convert.ToInt64(id);. Here's what the debugger is telling me: When I hover over id it says: "id | {string[1]}" and when I press the plus button it shows: "[0] | '7128383'" When I hover over idAsLong, it says: "idAsLong | 0" Why isn't it converting it properly?
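
    The debugger output suggests the bound value is a one-element string array rather than a scalar string. As a hedged illustration (the helper name is hypothetical, not the poster's code), one way to coerce such a value into a long:

        using System;

        class IdConversionSketch
        {
            // The bound "id" may arrive as a string or as a string[] holding one element.
            static long ParseId(object id)
            {
                string[] values = id as string[];
                string raw = (values != null && values.Length > 0) ? values[0] : id as string;

                long parsed;
                return long.TryParse(raw, out parsed) ? parsed : 0L;
            }

            static void Main()
            {
                Console.WriteLine(ParseId(new[] { "7128383" }));  // 7128383
                Console.WriteLine(ParseId("7128383"));            // 7128383
            }
        }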

    Read the article

  • What's the equivalent of gcc's -mwindows option in cmake?

    - by Runner
    I'm following the tutorial: http://zetcode.com/tutorials/gtktutorial/firstprograms/ It works, but each time I double-click the executable there is a console window which I don't want there. How do I get rid of that console? I tried this: add_executable(Cmd WIN32 cmd.c) But got this fatal error:
        MSVCRTD.lib(crtexew.obj) : error LNK2019: unresolved external symbol _WinMain@16 referenced in function ___tmainCRTStartup
        Cmd.exe : fatal error LNK1120: 1 unresolved externals
    While using gcc directly works: gcc -o Cmd cmd.c -mwindows. I'm guessing it has something to do with the entry function, int main( int argc, char *argv[]), but why does gcc work? How can I make it work with cmake? UPDATE: Let me paste the source code here for convenience:
        #include <gtk/gtk.h>

        int main( int argc, char *argv[])
        {
            GtkWidget *window;
            gtk_init(&argc, &argv);
            window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            gtk_widget_show(window);
            gtk_main();
            return 0;
        }
    UPDATE2: Why does gcc -mwindows work but add_executable(Cmd WIN32 cmd.c) does not? Maybe that's not the equivalent of -mwindows in cmake?

    Read the article

  • Subclassing UIButton but can't access my properties

    - by Ross Ellerington
    Hi, I've created a subclass of UIButton:
        //
        //  DetailButton.h
        #import <Foundation/Foundation.h>
        #import <MapKit/MapKit.h>

        @interface MyDetailButton : UIButton {
            NSObject *annotation;
        }
        @property (nonatomic, retain) NSObject *annotation;
        @end

        //
        //  DetailButton.m
        //
        #import "MyDetailButton.h"

        @implementation MyDetailButton
        @synthesize annotation;
        @end
    I figured that I can then create this object and set the annotation object by doing the following:
        MyDetailButton* rightButton = [MyDetailButton buttonWithType:UIButtonTypeDetailDisclosure];
        rightButton.annotation = localAnnotation;
    localAnnotation is an NSObject but it is really an MKAnnotation. I can't see why this doesn't work, but at runtime I get this error:
        2010-05-27 10:37:29.214 DonorMapProto1[5241:207] *** -[UIButton annotation]: unrecognized selector sent to instance 0x445a190
        2010-05-27 10:37:29.215 DonorMapProto1[5241:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UIButton annotation]: unrecognized selector sent to instance 0x445a190'
    I can't see why it's even looking at UIButton, because I've subclassed that, so it should be looking at the MyDetailButton class to set that annotation property. Have I missed something really obvious? It feels like it :) Thanks in advance for any help you can provide. Ross

    Read the article

  • Trouble getting $.ajax() to work in PhoneGap against a locally hosted server

    - by David Gutierrez
    Currently trying to make an ajax post request to an IIS Express hosted MVC 4 Web API end point from an android VM (Bluestacks) on my machine. Here are the snippets of code that I am trying, and cannot get to work: $.ajax({ type: "POST", url: "http://10.0.2.2:28434/api/devices", data: {'EncryptedPassword':'1234','UserName':'test','DeviceToken':'d234'} }).always(function( data, textStatus, jqXHR ) { alert( textStatus ); }); Whenever I run this request I always get back a textStatus of 'error'. After hours of trying different things, I pushed my End Point to an actual server, and was able to actually get responses back in PhoneGap if I built up an XMLHttpRequest by hand, like so: var request = new XMLHttpRequest(); request.open("POST", "http://172.16.100.42/MobileRewards/api/devices", true); request.onreadystatechange = function(){//Call a function when the state changes. console.log("state = " + request.readyState); console.log("status = " + request.status); if (request.readyState == 4) { if (request.status == 200 || request.status == 0) { console.log("*" + request.responseText + "*"); } } } request.send("{EncryptedPassword:1234,UserName:test,DeviceToken:d234}"); Unfortunately, if I try to use $.ajax() against the same end point in the snippet above I still get a status text that says 'error', here is that snippet for reference: $.ajax({ type: "POST", url: "http://172.16.100.42/MobileRewards/api/devices", data: {'EncryptedPassword':'1234','UserName':'test','DeviceToken':'d234'} }).always(function( data, textStatus, jqXHR ) { alert( textStatus ); }); So really, there are a couple of questions here. 1) Why can't I get any ajax calls (post or get) to successfully hit my End Point when it's hosted via IIS Express on the same machine that the Android VM is running? 2) When my end point is hosted on an actual server, through IIS and served through port 80, why can't I get post requests to be successful when I use jquery's ajax calls? (Even though I can get it to work by manually creating an XMLHttpRequest) Thanks

    Read the article

  • When should I be cautious using data binding in .NET?

    - by Ben McCormack
    I just started working on a small team of .NET programmers about a month ago and recently got in a discussion with our team lead regarding why we don't use databinding at all in our code. Every time we work with a data grid, we iterate through a data table and populate the grid row by row; the code usually looks something like this: Dim dt as DataTable = FuncLib.GetData("spGetTheData ...") Dim i As Integer For i = 0 To dt.Rows.Length - 1 '(not sure why we do not use a for each here)' gridRow = grid.Rows.Add() gridRow(constantProductID).Value = dt("ProductID").Value gridRow(constantProductDesc).Value = dt("ProductDescription").Value Next '(I am probably missing something in the code, but that is basically it)' Our team lead was saying that he got burned using data binding when working with Sheridan Grid controls, VB6, and ADO recordsets back in the nineties. He's not sure what the exact problem was, but he remembers that binding didn't work as expected and caused him some major problems. Since then, they haven't trusted data binding and load the data for all their controls by hand. The reason the conversation even came up was because I found data binding to be very simple and really liked separating the data presentation (in this case, the data grid) from the in-memory data source (in this case, the data table). "Loading" the data row by row into the grid seemed to break this distinction. I also observed that with the advent of XAML in WPF and Silverlight, data-binding seems like a must-have in order to be able to cleanly wire up a designer's XAML code with your data. When should I be cautious of using data-binding in .NET?
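
    For contrast, a minimal hedged sketch (using a stock WinForms DataGridView purely for illustration; the team's actual grid control and the FuncLib helper from the post may behave differently) of letting binding populate the grid instead of copying row by row:

        using System.Data;
        using System.Windows.Forms;

        class GridBindingSketch
        {
            static void BindProducts(DataGridView grid, DataTable dt)
            {
                // dt would come from something like FuncLib.GetData("spGetTheData ...") in the post.
                grid.AutoGenerateColumns = true;
                grid.DataSource = dt;   // the grid renders the table's rows and columns directly
            }
        }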

    Read the article

  • Which key value store is the most promising/stable?

    - by Mike Trpcic
    I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of: CouchDB MongoDB Riak Redis Tokyo Cabinet Berkeley DB Cassandra MemcacheDB And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are: (Most Important) Which do you recommend, and why? Which one is the fastest? Which one is the most stable? Which one is the easiest to set up and install? Which ones have bindings for Python and/or Ruby? Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which Key-Value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them)

    Read the article

  • Using a UITableViewController with a small-sized table?

    - by rpj
    When using a UITableViewController, the initWithStyle: method automatically creates the underlying UITableView with - according to the documentation - "the correct dimensions". My problem is that these "correct dimensions" seem 320x460 (the iPhone's screen size), but I'm pushing this TableView/Controller pair into a UINavigationController which is itself contained in a UIView, which itself is about half the height of the screen. No frame or bounds wrangling I can come up with seems to correctly reset the table's size, and as such it's "too long", meaning there are a collection of rows that are pushed off the bottom of the screen and are not visible nor reachable by scrolling. So my question comes down to: what is the proper way to tell a UITableViewController to resize its component UITableView to a specified rectangle? Thanks! Update I've tried all the techniques suggested here to no avail, but I did find one interesting thing: if I eschew the UINavigationController altogether (which I'm not yet willing to do for production, but as an experiment), and add the table view as a direct subview of the enclosing view I mentioned, the frame size given is respected. The very moment I re-introduce the UINavigationController into the mix, no matter if it is added as a subview before or after the table view, and no matter if alloc/init it before or after the table view is added as a subview, the result is the same as it was before. I'm beginning to suspect UINavigationController isn't much of a team player... Update 2 The suggestion to check frame size after the table view on screen was a good one: turns out that the navigation controller is in fact resizing it some time in between load and display. My solution, hacky at best, has been to cache the frame given on load and to reset it if changed at the beginning of tableView:cellForRowAtIndexPath:. Why there you ask? Because it's the one place I found that worked, that's why! I don't consider this a solution as it's obviously improper, but for the benefit of anyone else reading, it does seem to work.

    Read the article

  • Inode to device information

    - by Methos
    I have 3 questions:
    1. I want to figure out if a file belongs to a USB device given the file's inode. By looking in the latest kernel sources (2.6.33) on LXR, I think one can find that information by following pointers as follows: inode->super_block->block_device->backing_dev_info->device->device_driver (or device_type). However, the kernel that I am working with - 2.6.22.14 - does not have a struct device pointer in the backing_dev_info object. So how can I figure out which device a file belongs to from just the inode? I see that each of the inode, super_block and block_device contain an object of type 'dev_t'. But even after searching a lot, I could not find out how to convert 'dev_t' into struct device *. Is there any way to get that information?
    2. I tried to print device major and minor numbers using imajor(inode) and iminor(inode). However, for every file - belonging to hdd or usb - it always prints the major and minor number as zero. Why would that be happening?
    3. I searched online for USB major numbers and I found out that the major number for USB is 180. However, on multiple machines, it showed me the major number associated with the USB dev as 253.
        $ ls -ltr /dev/usb*
        crw-rw---- 1 root root 253, 4 2010-04-13 17:20 /dev/usbmon4
        crw-rw---- 1 root root 253, 3 2010-04-13 17:20 /dev/usbmon3
        crw-rw---- 1 root root 253, 8 2010-04-13 17:20 /dev/usbmon8
        crw-rw---- 1 root root 253, 5 2010-04-13 17:20 /dev/usbmon5
        crw-rw---- 1 root root 253, 1 2010-04-13 17:20 /dev/usbmon1
        crw-rw---- 1 root root 253, 7 2010-04-13 17:20 /dev/usbmon7
    Why is that so?

    Read the article

  • iPhone crash log with dSYM not loading debug information

    - by AngeDeLaMort
    Hello, I was trying to see why my application crashed on the device (iPhone) using the dSYM generated alongside the executable (in ad hoc), but I don't know why there isn't any useful information. It seems that "Organizer" is able to find the appropriate dSYM and translate some data into something more readable, but when it comes to my application, I just have an address. Since I know how to reproduce it, I've tried to set up my build so it can help me in the future. So, I've tried to find if I had all the proper flags set in the project build properties, and everything seems fine. After doing some research, it seems that all information is stripped at link time and the dSYM seems completely useless. I've played with some flags, but nothing changed. So, is there something special to do in order to make the crash file human readable? Or is it impossible in the ad hoc setting? The closest thing to working that I've done was to build a debug version and look up the address in it. At least it seems to give the right file. So, I made a sample app and here is what I have (the line I want is #4):
        Thread 0 Crashed:
        0   libobjc.A.dylib   0x00003ebc objc_msgSend + 20
        1   UIKit             0x0005c970 -[UIView dealloc] + 60
        2   UIKit             0x0005c840 -[UIImageView dealloc] + 76
        3   CoreFoundation    0x0003963a -[NSObject release] + 28
        4   MyApplication     0x000046a6 0x1000 + 13990
        5   UIKit             0x00069750 -[UIViewController view] + 44
        6   MyApplication     0x000053fa 0x1000 + 17402
    The crash is made using 2 successive releases on an object. Thanks in advance.

    Read the article

  • "hour" int taken from NSDate not behaving as expected at midnight??

    - by Eric
    I feel like I've lost my mind. Can someone tell me what's going on here? Also, I'm sure there is a better way to do what I'm trying to do, but I'm not interested in that now. I'd just like to solve the mystery of why my ints are not responding to logic as expected.
        // Set "At: " field close to current time
        NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
        [dateFormatter setDateFormat:@"HH"];
        int hour = [[dateFormatter stringFromDate:[NSDate date]] intValue];
        [dateFormatter setDateFormat:@"mm"];
        int minute = [[dateFormatter stringFromDate:[NSDate date]] intValue];
        NSLog(@"currently %i:%i", hour, minute);

        if(hour >= 12){ // convert to AM/PM
            selectedMeridiem = 1;
            if(hour != 12){
                hour = hour - 12;
            }
        }
        else{
            selectedMeridiem = 0;
        }

        selectedHour = hour - 1;
        if(selectedHour <= 0){
            selectedHour = 11;
        }
    When I debug the above code with my clock set to 12:XX AM, the integer "hour" returned is 0. But then any if statements with the condition if(hour == 0) are not evaluated. Likewise, this would not be evaluated either: if(hour < 1). The code above puts the hour int into another int, selectedHour (don't worry about why I'm doing this for now), but selectedHour suffers from the same weird behavior; the if(selectedHour <= 0) line is never evaluated. Am I going crazy, or am I just an idiot? Maybe there's some behavior of 0 integers that I'm not aware of. All of my code runs fine as long as it's not 12:XX AM.

    Read the article

  • Please help us non-C++ developers understand what RAII is

    - by Charlie Flowers
    Another question I thought for sure would have been asked before, but I don't see it in the "Related Questions" list. Could you C++ developers please give us a good description of what RAII is, why it is important, and whether or not it might have any relevance to other languages? I do know a little bit. I believe it stands for "Resource Acquisition is Initialization". However, that name doesn't jive with my (possibly incorrect) understanding of what RAII is: I get the impression that RAII is a way of initializing objects on the stack such that, when those variables go out of scope, the destructors will automatically be called causing the resources to be cleaned up. So why isn't that called "using the stack to trigger cleanup" (UTSTTC:)? How do you get from there to "RAII"? And how can you make something on the stack that will cause the cleanup of something that lives on the heap? Also, are there cases where you can't use RAII? Do you ever find yourself wishing for garbage collection? At least a garbage collector you could use for some objects while letting others be managed? Thanks.
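
    Since the question explicitly asks whether RAII has any relevance to other languages, here is a rough C# analogue (a sketch, not a claim that C# has true RAII): IDisposable plus a using block gives scope-bound, deterministic cleanup, similar in spirit to a destructor running when a stack object goes out of scope, though it only applies where the caller remembers to write the using.

        using System;
        using System.IO;

        class RaiiAnalogueSketch
        {
            static void Main()
            {
                // "Acquire" the resource in the constructor...
                using (var writer = new StreamWriter("log.txt"))
                {
                    writer.WriteLine("work happens here");
                    // ...and Dispose() runs when this scope ends, whether we leave
                    // normally or via an exception: cleanup is tied to the scope.
                }
            }
        }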

    Read the article

  • java applet stops running on exceptions

    - by Marius
    I've developed a simple applet that imports an image from the clipboard. When I run the class file from NetBeans, everything works fine. But when I try to run it as an applet, it gives me lots of errors in the Java console and does not run.
    - The applet is signed.
    - There is a static method in one class, called getImageFromClipboard(). When the applet runs, it calls this method.
    - The getImageFromClipboard() method has a try-catch block and suppresses all errors. It simply returns either a BufferedImage or null.
    - When the applet runs, it does some visual adjustments before calling getImageFromClipboard().
    Now the scenario is as follows: the class from NetBeans runs, fails to import the image and adjusts the interface accordingly (displays an error in a label). But when I run it in a browser, the Java console is filled with errors and nothing after the getImageFromClipboard() line works, although the applet itself loads and does everything it's supposed to do before importing the image. So why am I getting errors if I accept the certificate and all of the possible errors are in try-catch blocks? None of this code should throw any exceptions. Any ideas why this is happening? Or do you need to see the errors to tell? UPDATE: I've managed to find out the problem myself. The class that I'm using is not in the jar file :( How do I add it in? I'm using "add jar folder" in NetBeans on the libraries package to import it, but it does not seem to get copied to the jar.

    Read the article

  • Cannot use await in Portable Class Library for Win 8 and Win Phone 8

    - by Harry Len
    I'm attempting to create a Portable Class Library in Visual Studio 2012 to be used for a Windows 8 Store app and a Windows Phone 8 app. I'm getting the following error: 'await' requires that the type 'Windows.Foundation.IAsyncOperation' have a suitable GetAwaiter method. Are you missing a using directive for 'System'? At this line of code: StorageFolder guidesInstallFolder = await Package.Current.InstalledLocation.GetFolderAsync(guidesFolder); My Portable Class Library is targeted at .NET Framework 4.5, Windows Phone 8 and .NET for Windows Store apps. I don't get this error for this line of code in a pure Windows Phone 8 project, and I don't get it in a Windows Store app either so I don't understand why it won't work in my PCL. The GetAwaiter is an extension method in the class WindowsRuntimeSystemExtensions which is in System.Runtime.WindowsRuntime.dll. Using the Object Browser I can see this dll is available in the .NET for Windows Store apps component set and in the Windows Phone 8 component set but not in the .NET Portable Subset. I just don't understand why it wouldn't be in the Portable Subset if it's available in both my targeted platforms.
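
    One commonly suggested workaround, sketched below under the assumption that the WinRT call can live outside the portable assembly (the interface and class names here are hypothetical, not from the post), is to keep Windows.Storage types out of the PCL and expose the operation through a platform-neutral, Task-based interface that each platform project implements:

        using System.Collections.Generic;
        using System.Linq;
        using System.Threading.Tasks;
        using Windows.ApplicationModel;
        using Windows.Storage;

        // This interface would live in the Portable Class Library: no WinRT types, only Task.
        public interface IGuideStore
        {
            Task<IList<string>> GetGuideFileNamesAsync(string folderName);
        }

        // This implementation would live in the Windows Store (or Windows Phone 8) project,
        // where the GetAwaiter extensions for IAsyncOperation are available.
        public class GuideStore : IGuideStore
        {
            public async Task<IList<string>> GetGuideFileNamesAsync(string folderName)
            {
                StorageFolder folder =
                    await Package.Current.InstalledLocation.GetFolderAsync(folderName);
                var files = await folder.GetFilesAsync();
                return files.Select(f => f.Name).ToList();
            }
        }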

    Read the article

  • What are the alternatives to public fields?

    - by James
    I am programming a game in Java, and as the question title suggests, I am using public fields in my classes (for the time being). From what I have seen, public fields are bad, and I have some understanding why (but if someone could clarify why you should not use them, that would be appreciated). The thing is that, also from what I have seen (and it seems logical), using private fields but accessing them through getters and setters is also not good, as it defeats the point of using private fields in the first place. So, my question is, what are the alternatives? Or do I really have to use private fields with getters and setters? For reference, here is one of my classes and some of its methods. I will elaborate more if need be.
        //The player's fields.
        public double health;
        public String name;
        public double goldCount;
        public double maxWeight;
        public double currentWeight;
        public double maxBackPckSlts;
        public double usedBackPckSlts; // The current back pack slots in use
        public double maxHealth; // Maximum amount of health
        public ArrayList<String> backPack = new ArrayList<String>();

        //This method happens whenever the player dynamically takes damage (i.e. when it is not scripted for the player to take damage).
        //Parameters will be added to make it dynamic so the player can take any spread of damage.
        public void beDamaged(double damage) {
            this.health -= damage;
            if (this.health < 0) {
                this.health = 0;
            }
        }

        public void gainHealth(double gainedHp) {
            this.health += gainedHp;
            if (this.health > this.maxHealth) {
                this.health = this.maxHealth;
            }
        }

    Read the article

  • CSS: how to automatically resize the wrapper div

    - by Phrixus
    Hi, I've been struggling with this problem. There is a wrapper div that contains 3 vertical column divs full of text, and this wrapper div has a red background color so that it can act as a background for the entire text.
        <div id="content_wrapper">
            <div id="cside_a">
                // massive texts goes here
            </div>
            ... // two more columns go here.
        </div>
    And here is the CSS code for them.
        #content_wrapper {
            background-color:#DB0A00;
            background-repeat:no-repeat;
            min-height:400px;
        }
        #cside_a, #cside_b, #cside_c {
            float: left;
            width: 33%;
        }
    And this code gives me a background that covers only a 400px-high box. My expectation was that the wrapper div would automatically resize depending on the size of the divs in it. Somehow putting "overflow:hidden" in the wrapper's CSS makes everything work fine. I have no idea why "overflow:hidden" works; shouldn't this hide all the overflowed text? Could anyone explain why? Is this the correct way to do it anyway?

    Read the article

  • Win32 DLL importing issues (DllMain)

    - by brady
    I have a native DLL that is a plug-in to a different application (one that I have essentially zero control of). Everything works just great until I link with an additional .lib file (which links my DLL to another DLL named ABQSMABasCoreUtils.dll). This file contains some additional API from the parent application that I would like to utilize. I haven't even written any code to use any of the functions exported, but just linking in this new DLL is causing problems. Specifically, I get the following error when I attempt to run the program:
        The application failed to initialize properly (0xc0000025). Click on OK to terminate the application.
    I believe I have read somewhere that this is typically due to a DllMain function returning FALSE. Also, the following message is written to the standard output:
        ERROR: Memory allocation attempted before component initialization
    I am almost 100% sure this error message is coming from the application and is not some type of Windows error. Looking into this a little more (aka flailing around and flipping every switch I know of) I linked with /MAP turned on and found this in the resulting .map file:
        0001:000af220 ??3@YAXPEAX@Z 00000001800b0220 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll
        0001:000af226 ??2@YAPEAX_K@Z 00000001800b0226 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll
        0001:000af22c ??_U@YAPEAX_K@Z 00000001800b022c f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll
        0001:000af232 ??_V@YAXPEAX@Z 00000001800b0232 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll
    If I undecorate those names using "undname" they give the following (same order):
        void __cdecl operator delete(void * __ptr64)
        void * __ptr64 __cdecl operator new(unsigned __int64)
        void * __ptr64 __cdecl operator new[](unsigned __int64)
        void __cdecl operator delete[](void * __ptr64)
    I am not sure I understand how anything from ABQSMABasCoreUtils.dll can exist within this .map file, or why my DLL is even attempting to load ABQSMABasCoreUtils.dll if I don't have any code that references this DLL. Can anyone help me put this information together and find out why this isn't working? For what it's worth, I have confirmed via "dumpbin" that the parent application imports the same DLL (ABQSMABasCoreUtils.dll), so it is being loaded no matter what. I have also tried delay loading this DLL in my DLL, but that did not change the results.

    Read the article

  • Entity Framework and differences in Contains between SQL and objects using ToLower

    - by John Ptacek
    I have run into an "issue" I am not quite sure I understand with Entity Framework. I am using Entity Framework 4 and have tried to take a TDD approach. As a result, I recently implemented a search feature using a Repository pattern. For my test project, I am implementing my repository interface and have a set of "fake" object data I am using for test purposes. I ran into an issue trying to get the Contains clause to work for a case-invariant search. My code snippet, for both my test and the repository class used against the database, is as follows:
        if (!string.IsNullOrEmpty(Description))
        {
            items = items.Where(r => r.Description.ToLower().Contains(Description.ToLower()));
        }
    However, when I ran my test cases the results were not populated if my case did not match the underlying data. I tried looking into what I thought was an issue for a while. To clear my mind, I went for a run and wondered if the same code with EF would work against a SQL back-end database, since SQL explicitly supports the LIKE command, and it executed as I expected, using the same logic. I understand why EF against the database back end supports the Contains clause. However, I was surprised that my unit tests did not. Any ideas why, other than SQL Server's support of the LIKE clause, this behaves differently when I use objects I populate in a collection instead of going against the database server? Thanks! John
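
    As a hedged illustration of the general difference (the sample strings below are made up, not the poster's data): LINQ to Objects evaluates string.Contains as an ordinal, case-sensitive check, whereas the comparison EF translates to SQL follows the column's collation, which is commonly case-insensitive, so the same expression can behave differently against an in-memory fake repository than against the database.

        using System;

        class ContainsComparisonSketch
        {
            static void Main()
            {
                // In memory (LINQ to Objects): ordinal and case-sensitive.
                Console.WriteLine("Widget Deluxe".Contains("widget"));                       // False

                // Lower-casing both sides, as in the repository snippet, restores a match.
                Console.WriteLine("Widget Deluxe".ToLower().Contains("widget".ToLower()));   // True

                // An explicitly case-insensitive alternative for in-memory filtering.
                Console.WriteLine("Widget Deluxe".IndexOf("widget",
                    StringComparison.OrdinalIgnoreCase) >= 0);                               // True
            }
        }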

    Read the article

  • Using ScriptingBridge framework for communicating with Entourage

    - by Subramanian Ganapathy
    Hi, the motivation for my question is the following doc, which describes how Mail.app can be integrated using ScriptingBridge: http://developer.apple.com/mac/library/samplecode/SBSendEmail/Introduction/Intro.html I tried to apply a similar technique with Entourage as well, but could not get any results so far. I understand that using AppleScript would help me solve my problem, and mactech.com has extensive documentation for doing so. But I find this ScriptingBridge technique elegant and want to figure out why it is not working for me with Entourage. The biggest problem seems to be my inability to create scripting classes based on their names, as happens in Mail, because Entourage has a different interface than Mail, as their headers indicate. Could someone please tell me what I am missing or provide any sort of hint on why this won't work? I am also adding sample code:
        MicrosoftEntourageApplication * mail = [SBApplication applicationWithBundleIdentifier:@"com.Microsoft.Entourage"];
        MicrosoftEntourageOutgoingEmailMessage * emailMessage = [[[mail classForScriptingClass:@"outgoing message"] alloc]
            initWithProperties: [NSDictionary dictionaryWithObjectsAndKeys: @"my sample subject", @"subject", @"my sample body", @"content", nil]];
        //then I create a set of recipients and try to use "to recipient" as the string scripting class id, but MicrosoftEntourageRecipient is returned as nil
        MicrosoftEntourageRecipient * theRecipient = [[[mail classForScriptingClass:@"to recipient"] alloc]
            initWithProperties: [NSDictionary dictionaryWithObjectsAndKeys: @"[email protected]", @"address", nil]];
    I am trying to make the simple thing work; I am not even concentrating on the task I am supposed to do now. I am a Cocoa beginner (and willing to learn), so please excuse any syntactic naiveties and do point them out in the sample code, in addition to answering my question. Best Regards, Subramanian

    Read the article

  • Is it normal for C++ static initialization to appear twice in the same backtrace?

    - by Joseph Garvin
    I'm trying to debug a C++ program compiled with GCC that freezes at startup. GCC mutex-protects a function's static local variables, and it appears that waiting to acquire such a lock is why it freezes. How this happens is rather confusing. First module A's static initialization occurs (there are __static_init functions GCC invokes that are visible in the backtrace), which calls a function Foo() that has a static local variable. The static local variable is an object whose constructor calls through several layers of functions, then suddenly the backtrace has a few ??'s, and then it is in the static initialization of a second module B (the __static functions occur all over again), which then calls Foo(), but since Foo() never returned the first time, the mutex on the local static variable is still set, and it locks. How can one static init trigger another? My first theory was shared libraries: that module A would be calling some function in module B that would cause module B to load, thus triggering B's static init, but that doesn't appear to be the case. Module A doesn't use module B at all. So I have a second (and horrifying) guess. Say that:
    - Module A uses some templated function or a function in a templated class, e.g. foo<int>::bar()
    - Module B also uses foo<int>::bar()
    - Module A doesn't depend on module B at all
    - At link time, the linker has two instances of foo<int>::bar(), but this is OK because template functions are marked as weak symbols...
    - At runtime, module A calls foo<int>::bar, and the static init of module B is triggered, even though module B doesn't depend on module A! Why? Because the linker decided to go with module B's instance of foo::bar instead of module A's instance at link time.
    Is this particular scenario valid? Or should one module's static init never trigger static init in another module?

    Read the article

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8-CPU server, MySQL, and I start 7 delayed_job processes: RAILS_ENV=production script/delayed_job -n 7 start Q1: I'm wondering whether it is possible that 2 or more delayed_job processes start processing the same job (the same record-row in the delayed_jobs table). I checked the code of the delayed_job plugin but cannot find the locking I would expect. I think each process should lock the database table before executing an UPDATE on the locked_by column. They lock the record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by...). Is that really enough? No locking needed? Why? I know that UPDATE has higher priority than SELECT, but I think this does not have the effect in this case. My understanding of the multi-threaded situation is:
        Process1: Get waiting job X. [OK]
        Process2: Get waiting job X. [OK]
        Process1: Update locked_by field. [OK]
        Process2: Update locked_by field. [OK]
        Process1: Get waiting job X. [Already processed]
        Process2: Get waiting job X. [Already processed]
    I think in some cases more than one worker can get the same job and start processing it. Q2: Is 7 delayed_jobs a good number for an 8-CPU server? Why or why not? Thx 10x!

    Read the article
