Search Results

Search found 3678 results on 148 pages for 'weird'.

  • White Screen of Death (WSOD) in Browser

    - by nickyt
    Here are the specs: ASP.NET 3.5 using ASP.NET AJAX, the AJAX Control Toolkit, jQuery 1.3.2 and web services, running on IIS6 on Windows Server 2003 SP1 with SQL Server 2005 SP3. The site runs over SSL.

    Here's the problem: I'm getting the White Screen of Death (WSOD) in pretty much any browser (at least Firefox and IE 7/8). We have an application that uses one popup window for updating records. Most of the time, when you click the [Edit] button to edit a record, the popup window opens and loads the update page. However, after editing records for a while, the popup window will suddenly open but stay blank and hang, even though the URL is in the address bar. Watching the traffic in Fiddler, I noticed that the request for the update page is never sent, which leads me to believe it's some kind of lockup on the client side. If I copy the same URL from the popup window into a new browser window, the page generally loads fine.

    Observations:
    - Since the request is never sent to the server, it's definitely something client-side.
    - It only appears to happen when there is some semblance of traffic on the site, which is weird because the problem appears to be contained within client-side code.
    - A web service is called in the background every few seconds to check whether the user is logged on, but this doesn't cause the freeze.

    I'm really at a loss here. I've googled WSOD, but not much seems to relate to my specific WSOD. Any ideas?

  • Rails: creating a custom data type, to use with generator classes, and a bunch of questions related to it

    - by Shyam
    Hi,

    After being productive with Rails for some weeks, I have learned some tricks and gained some experience with the framework. About 10 days ago, I figured out that it is possible to build a custom data type for migrations by adding some code to the table definition. Also, after learning a bit about floating-point numbers (and how evil they are) versus integers, the money gem, and other possible solutions, I decided I didn't WANT to use the money gem, but instead to learn more about programming and find a solution myself. Some suggestions said I should use integers: one for the whole units and one for the cents. While playing in script/console, I discovered how easy it is to work with calculations and arrays. But I am talking too much (the reason I am is to give sufficient background).

    Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I would like to use a DRY method. In my opinion, I should build a custom "object" that can hold two variables (Fixnum), one for the whole units and one for the cents. In my big dream, I would be able to do the following:

        script/generate scaffold Cake name:string description:text cost:mycustom

    where mycustom would create two integer columns (one for whole units, one for cents). Right now I could do this with:

        script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer

    I also had an idea of creating a "cost model", which would hold two integer columns and add a cost_id column to my scaffold. But wouldn't that be an extra table that would cause some kind of performance penalty? And wouldn't that defeat the purpose of the Cake model in the first place, because the costs are an attribute of individual Cake entries? The reason I would want such functionality is that I am thinking of having multiple "costs" inside my Rails application.

    Thank you for your feedback, comments and answers! I hope my message came through understandably; my apologies for incorrect grammar or weird sentences, as English is not my native language.
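
    A minimal sketch of the migration half of this idea, assuming a Rails 2.x-style migration DSL (the method name money and the _w/_c suffixes are illustrative placeholders, not an existing Rails API): reopening the table definition lets a single declaration expand into the two integer columns described above. The generator-side mapping for "cost:mycustom" would still need separate work.

        # sketch only: t.money :cost expands into the cost_w/cost_c integer pair
        class ActiveRecord::ConnectionAdapters::TableDefinition
          def money(name, options = {})
            column("#{name}_w", :integer, options) # whole units
            column("#{name}_c", :integer, options) # cents
          end
        end

    With that in place, a migration could declare t.money :cost, and the model could expose a purely illustrative reader such as format('%d.%02d', cost_w, cost_c).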

  • Git. Checkout feature branch between merge commits

    - by mageslayer
    Hi all,

    It's kind of weird, but I can't carry out a pretty common operation with git. Basically, what I want is to check out a feature branch, not at its head but at a SHA id. This SHA points between merges from the master branch. The problem is that all I get is just the master branch, without the commits from the feature branch. Currently I'm trying to fix a regression introduced earlier in the master branch. To be more descriptive, I crafted a small bash script to recreate the problem repository:

        #!/bin/bash
        rm -rf ./.git
        git init
        echo "test1" > test1.txt
        git add test1.txt
        git commit -m "test1" -a
        git checkout -b patches master
        echo "test2" > test2.txt
        git add test2.txt
        git commit -m "test2" -a
        git checkout master
        echo "test3" > test3.txt
        git add test3.txt
        git commit -m "test3" -a
        echo "test4" > test4.txt
        git add test4.txt
        git commit -m "test4" -a
        echo "test5" > test5.txt
        git add test5.txt
        git commit -m "test5" -a
        git checkout patches
        git merge master
        # Now how to get a branch having all commits from patches + test3.txt + test4.txt - test5.txt ???

    Basically, all I want is to check out branch "patches" with files 1-4, but not including test5.txt. Doing:

        git checkout [sha_where_test4.txt_entered]

    just gives a branch with test1, test3 and test4, but excluding test2.txt.

    Thanks.
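
    A hedged sketch of one way to get that state (branch name and <test4-sha> are placeholders): since any SHA on master's line can never contain the patches commits, start instead from patches' pre-merge tip and merge master only up to the wanted commit.

        # patches^1 is the first parent of the merge commit, i.e. the tip of
        # patches before "git merge master"; <test4-sha> stands for the master
        # commit that added test4.txt
        git checkout -b patches-partial patches^1
        git merge <test4-sha>   # brings in test3 + test4 but not test5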

  • charsets in MySQL replication

    - by niklassaers
    Hi guys,

    What can I do to ensure that replication will use latin1 instead of utf-8? I'm migrating between a MySQL 5.1.22 server (master) on a Linux system and a MySQL 5.1.42 server (slave) on a FreeBSD system. My replication works well, but when non-ASCII characters are in my varchars, they turn "weird". The Linux/MySQL-5.1.22 host shows the following character set variables:

        character_set_client=latin1
        character_set_connection=latin1
        character_set_database=latin1
        character_set_filesystem=binary
        character_set_results=latin1
        character_set_server=latin1
        character_set_system=utf8
        character_sets_dir=/usr/share/mysql/charsets/
        collation_connection=latin1_swedish_ci
        collation_database=latin1_swedish_ci
        collation_server=latin1_swedish_ci

    while the FreeBSD host shows:

        character_set_client=utf8
        character_set_connection=utf8
        character_set_database=utf8
        character_set_filesystem=binary
        character_set_results=utf8
        character_set_server=utf8
        character_set_system=utf8
        character_sets_dir=/usr/local/share/mysql/charsets/
        collation_connection=utf8_general_ci
        collation_database=utf8_general_ci
        collation_server=utf8_general_ci

    Setting any of these variables from the MySQL CLI has no effect, and setting them in my.cnf or on the command line makes the server not start. Of course, both servers have the tables in question created the same way, in this case with DEFAULT CHARSET=latin1. Let me give you an example:

        CREATE TABLE `test` (
          `test` varchar(5) DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    When, on the master, in a latin1 terminal, I do "INSERT INTO test VALUES ('æøå')", this becomes the following on the slave when I select it from a latin1-based terminal:

        +--------+
        | test   |
        +--------+
        | æøå |
        +--------+

    On a UTF-8-based terminal on the replication slave, test contains:

        +------+
        | test |
        +------+
        | æøå  |
        +------+

    So my conclusion is that it is converted to utf8, even though the table definition is latin1. Is this a correct conclusion? Of course, on the master, in a latin1 terminal, it still says:

        +------+
        | test |
        +------+
        | æøå  |
        +------+

    Since both system character sets are utf-8, if I set both terminals to utf-8 and again do "INSERT INTO test VALUES ('æøå')" on the master with a utf-8 terminal, on the slave with utf-8 I get:

        +--------+
        | test   |
        +--------+
        | æøà   |
        +--------+

    If my conclusion is correct, all my replicated data is converted to utf8 (if it is utf8, it is treated as latin1 and converted to utf8), while all the old data in the table is, as the CREATE TABLE suggests, latin1. I'd love to convert it all to utf-8 if it weren't for the fact that legacy applications rely on it being latin1, so I need to keep it in latin1 while they still exist. What can I do to ensure that replication reads latin1, treats it as latin1, and writes it on the slave as latin1?

    Cheers,
    Nik
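
    A hedged sketch of the usual fix for a defaults mismatch like this, using MySQL 5.1 option names (verify against your build before relying on it, since an unrecognized option spelling is a common cause of the server refusing to start): pin the slave's server-level defaults to latin1 and stop it from negotiating utf8 per connection.

        # sketch for the slave's my.cnf
        [mysqld]
        character-set-server = latin1
        collation-server     = latin1_swedish_ci
        # ignore the client-advertised charset and use the server default instead
        skip-character-set-client-handshake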

  • PHP weirdness extending IMagick class

    - by Jamie Carl
    This is a really weird one. I have some code that is happily working on version 2.1.1RC1 of the php5-imagick module. It's basically just a class I wrote that extends the Imagick class and manages images stored in a database. Since upgrading to version 3.0.0RC1 (thankfully only on my dev box), things have gone to hell. It seems that object members are writable but NOT readable. Take the following sample code:

        class db_image extends IMagick
        {
            private $data;

            function __construct($id = null)
            {
                parent::__construct();
                $this->data = 'some plain text';
                echo $this->data;
            }
        }

    This will output absolutely NOTHING. My debugger indicates that the contents of $this->data are the correct string value, but I am unable to read the value back out of the member variable. Seriously. WTF? Does anyone know what is causing this, or has anyone seen it before? I don't even know how to replicate this behaviour in my own classes.
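
    A minimal workaround sketch, under the unverified assumption that the 3.0.0RC1 extension's internal property handling is shadowing declared members on subclasses (class and method names below are illustrative): hold the state next to the Imagick object instead of on it.

        // sketch only: wrap Imagick rather than extend it, so member reads
        // never pass through the extension's property handlers
        class db_image_wrapper
        {
            private $img;
            private $data;

            public function __construct($id = null)
            {
                $this->img  = new Imagick();
                $this->data = 'some plain text';
            }

            public function image() { return $this->img; }
            public function data()  { return $this->data; }
        }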

  • can javascript process binary data?

    - by Johnny
    Let me describe my question in a situation-oriented way. Assume IE is still the dominant web browser (Firefox has documentation for binary processing). XMLHttpRequest.responseText and XMLHttpRequest.responseXML in IE expect text or xml/xhtml/html, but what happens when the server answers the XMLHttpRequest with MIME type application/octet-stream? Would every character of the response string be less than 256? A straight answer would be much appreciated; I have no web server environment, so I can't test it myself.

    The background: using text or XML raises character set encoding issues, and I don't know how to process a CDATA node of an encoded XML document (e.g. utf-8, ascii, gb18030) with JavaScript. When I read a node's text, does the document object return bytes or decoded characters? If it returns characters decoded according to the charset indicated in the HTTP response header, it could all be wrong. To avoid the charset mess, I would like the server to respond with octet data, forcing strings to be encoded as utf-8 (or another charset) inside that binary format. My guess is that if the response is octets, the browser would not try to decode the response "text". Does this sound weird, or am I misunderstanding something fundamental?

    EDIT: I believe the question is asking this: Can JavaScript safely process strings that aren't encoded in Unicode? What are the problems with trying to do so?

    EDIT: No no no. What I mean is: if the http-header content-type is "application/octet", will IE try to decode the body (as 16-bit Unicode, or as IE's local charset setting) when I read XMLHttpRequestobj.responseText from JavaScript? Or will it wrap every single byte of the response body into a JavaScript string, so that every char in that string is less than or equal to 256 (char <= 256)? Am I talking a Martian language? Sadly, if I were a Martian, I would come as a tourist without fuzzy questions. However, I am in a country that shares at least one property with Mars: RED.
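
    For what it's worth, a sketch of the classic pre-XHR2 byte-reading trick (it relies on overrideMimeType, which Firefox of that era supports and old IE does not; IE code typically falls back to responseBody instead): tell the browser not to decode, then mask each char code down to one byte.

        // sketch: read a binary response as one character per byte
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/data.bin", true);
        if (xhr.overrideMimeType) {
            // suppress charset decoding where supported
            xhr.overrideMimeType("text/plain; charset=x-user-defined");
        }
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var bytes = [];
                for (var i = 0; i < xhr.responseText.length; i++) {
                    bytes.push(xhr.responseText.charCodeAt(i) & 0xFF); // one byte per char
                }
                // bytes[] now holds the raw octets regardless of any charset header
            }
        };
        xhr.send(null);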

  • dll's loaded through reflection

    - by Seattle Leonard
    OK, I've got a strange one here, and I want to know if any of you have ever run across anything like it. I've got a web app that loads up a bunch of DLLs through reflection. Basically, it looks for types that are derived from certain abstract types and adds them to the list of things it can make. Here's the weird part: while developing, there is never a problem. When installing it, there is never a problem to start with. Then, at a seemingly random time, the application breaks while trying to find all the types. I've had two sites sitting side by side where one worked while the other did not, and they were configured exactly (and I mean exactly) the same. IISRESETs never helped, but this did: I simply moved all the DLLs out of the bin directory, then moved them back. That's right, I just moved them out of the bin directory, then put them right back where they came from, and everything worked fine. Any ideas?
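
    A hedged debugging sketch (type and path names are placeholders, not the poster's code): when a reflection scan like this dies intermittently, the usual suspect is GetTypes() throwing ReflectionTypeLoadException, whose LoaderExceptions property names the assembly that failed to bind at that moment.

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        static class PluginScanner
        {
            public static IEnumerable<Type> FindDerivedTypes(string binPath, Type baseType)
            {
                foreach (var file in Directory.GetFiles(binPath, "*.dll"))
                {
                    Type[] types;
                    try
                    {
                        types = Assembly.LoadFrom(file).GetTypes();
                    }
                    catch (ReflectionTypeLoadException ex)
                    {
                        foreach (var inner in ex.LoaderExceptions)
                            Trace.TraceError(inner.Message);              // the real failure reason
                        types = ex.Types.Where(t => t != null).ToArray(); // keep what did load
                    }
                    foreach (var t in types)
                        if (baseType.IsAssignableFrom(t) && !t.IsAbstract)
                            yield return t;
                }
            }
        }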

  • SQL Server problems reading columns with a foreign key

    - by illdev
    I have a weird situation where simple queries seem to never finish. For instance:

        SELECT top 100 ArticleID FROM Article WHERE ProductGroupID=379114    -- returns immediately
        SELECT top 1000 ArticleID FROM Article WHERE ProductGroupID=379114   -- never returns
        SELECT ArticleID FROM Article WHERE ProductGroupID=379114            -- never returns
        SELECT top 1000 ArticleID FROM Article                               -- returns immediately

    By 'returning' I mean that in Query Analyzer the green check mark appears and it says "Query executed successfully". I sometimes get the rows painted to the grid in QA, but the query still goes on waiting until my client times out. 'Sometimes' means with queries like this one:

        SELECT ProductGroupID AS Product23_1_, ArticleID AS ArticleID1_, ArticleID AS ArticleID18_0_,
               Inventory_Name AS Inventory3_18_0_, Inventory_UnitOfMeasure AS Inventory4_18_0_,
               BusinessKey AS Business5_18_0_, Name AS Name18_0_, ServesPeople AS ServesPe7_18_0_,
               InStock AS InStock18_0_, Description AS Descript9_18_0_, Description2 AS Descrip10_18_0_,
               TechnicalData AS Technic11_18_0_, IsDiscontinued AS IsDisco12_18_0_, Release AS Release18_0_,
               Classifications AS Classif14_18_0_, DistributorName AS Distrib15_18_0_,
               DistributorProductCode AS Distrib16_18_0_, Options AS Options18_0_, IsPromoted AS IsPromoted18_0_,
               IsBulkyFreight AS IsBulky19_18_0_, IsBackOrderOnly AS IsBackO20_18_0_, Price AS Price18_0_,
               Weight AS Weight18_0_, ProductGroupID AS Product23_18_0_, ConversationID AS Convers24_18_0_,
               DistributorID AS Distrib25_18_0_, type AS Type18_0_
        FROM Article AS articles0_
        WHERE (IsDiscontinued = '0') AND (ProductGroupID = 379121)

    I have no idea what is going on. Probably SELECT is broken ;) I have a foreign key on ProductGroups:

        ALTER TABLE [dbo].[Article] WITH CHECK
            ADD CONSTRAINT [FK_ProductGroup_Articles] FOREIGN KEY([ProductGroupID])
            REFERENCES [dbo].[ProductGroup] ([ProductGroupID])
        GO
        ALTER TABLE [dbo].[Article] CHECK CONSTRAINT [FK_ProductGroup_Articles]

    There are some 6000 rows, and IsDiscontinued is a bit, not null, but leaving this condition out does not change the outcome. Can anyone tell me how to handle such a situation? I can provide more info if needed.

    Additional info: this does not seem to be restricted to this foreign key, but applies to all/some queries referencing this entity.
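
    A hedged first check for symptoms like this, assuming the instance is SQL Server 2005 or later (the DMVs below don't exist on 2000): a query that "runs forever" while nearly identical ones return instantly is often simply blocked, and the DMVs say by whom.

        -- sketch: is the hung session waiting on a lock, and which session holds it?
        SELECT session_id, blocking_session_id, wait_type, wait_time, command
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;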

  • Chrome: Dynamically created <style> tag does not have content?

    - by Shizhidi
    Hello. I encountered a weird problem while trying to write a cross-browser script. Basically, my header looks like this:

        <html>
        <head>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        </head>

    Then, in the body tag:

        <p id="hey">Hey</p>
        <input type="button" value="attachStyle" name="attachStyle" onclick="attachStyle();"></input>
        <script>
        function attachStyle() {
            var strVar = "";
            strVar += "<style type='text\/css'>#hey {border:5px solid red;}<\/script>";
            $("head").append(strVar);
        }
        </script>

    The button works in Firefox, but not in Chrome. When I looked at the HTML DOM elements in the developer tool, the style tag was inserted but without content, like this:

        <html>
        <head>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
            <style type='text/css'></script>
        </head>

    I'm curious as to what causes this. And how can I create CSS styles in a way that is cross-browser? Thanks!
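
    One thing worth ruling out first: the string above closes the <style> element with <\/script> rather than <\/style>, so browsers have to error-correct, and they do so differently. A hedged cross-browser sketch that sidesteps string parsing entirely by building the element through the DOM (the styleSheet branch is the classic old-IE fallback):

        function attachStyle() {
            var css = "#hey { border: 5px solid red; }";
            var style = document.createElement("style");
            style.type = "text/css";
            if (style.styleSheet) {                       // old IE
                style.styleSheet.cssText = css;
            } else {                                      // everything else
                style.appendChild(document.createTextNode(css));
            }
            document.getElementsByTagName("head")[0].appendChild(style);
        }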

  • jQuery/ajax working on IIS5.1 but not IIS6

    - by Mikejh99
    I'm running into a weird issue here. I have code that makes jQuery ajax calls to a web service and dynamically adds controls using jQuery. Everything works fine on my dev machine running IIS 5.1, but not when deployed to IIS 6. I'm using VS2010/ASP.NET 4.0, C#, jQuery 1.4.2 and jQuery UI 1.8.1, and the same browser for each. It partially works, though: the code adds the controls to the page, but they only respond when I click where they should be, and even then they aren't visible. I thought this was a CSS issue, but the styles are there too. The ajax calls look like this:

        $.ajax({
            url: "/WebServices/AssetManager.asmx/Assets",
            type: "POST",
            datatype: "json",
            async: false,
            data: "{'q':'" + req.term + "', 'type':'Condition'}",
            contentType: "application/javascript; charset=utf-8",
            success: function (data) {
                res($.map(data.d, function (item) {
                    return {
                        label: item.Name,
                        value: item.Name,
                        id: item.Id,
                        datatype: item.DataType
                    }
                }))
            }
        })

    Changing the content-type makes the autocomplete fail. I've quadruple-checked that all the paths are correct, there is no document footer enabled in IIS, and I'm not using IIS compression. Any idea why the page displays and works properly on IIS 5 but only partially on IIS 6? (If it failed completely, that would make more sense!) Is it a jQuery or CSS issue?

  • How do i write task? (parallel code)

    - by acidzombie24
    I am impressed with Intel Threading Building Blocks. I like how I write tasks and not thread code, and I like how it works under the hood, with my limited understanding (tasks sit in a pool, there won't be 100 threads on 4 cores, a task is not guaranteed to run because it isn't on its own thread and may be far back in the pool; but it may be run together with another related task, so you can't do the bad things typical of thread-unsafe code). I wanted to know more about writing tasks. I like the 'Task-based Multithreading - How to Program for 100 cores' video at http://www.gdcvault.com/sponsor.php?sponsor_id=1 (currently the second-to-last link; WARNING: it isn't 'great'). My favourite part was 'solving the maze is better done in parallel', around the 48-minute mark (you can click the link on the left side). However, I would like to see more code examples and some API documentation on how to write tasks. Does anyone have a good resource? I have no idea how a class or piece of code might look after pushing it onto a pool, how weird the code may look when you need to make a copy of everything, and how much of everything gets pushed onto a pool.
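
    For flavour, a minimal sketch of task-style TBB code, using tbb::parallel_invoke, the lightweight face of the task scheduler (the raw tbb::task API is considerably more verbose, and the lambdas assume a C++0x-capable compiler):

        #include <tbb/parallel_invoke.h>

        // each lambda becomes a task in the shared pool; the scheduler, not
        // the programmer, decides which worker thread runs it and when
        long fib(long n)
        {
            if (n < 10) return n < 2 ? n : fib(n - 1) + fib(n - 2); // serial cutoff
            long a = 0, b = 0;
            tbb::parallel_invoke(
                [&] { a = fib(n - 1); },
                [&] { b = fib(n - 2); });
            return a + b;
        }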

  • how do i implement / build / create an 'in memory database' for my unit test

    - by Michel
    Hi all,

    I started unit testing a while ago, and as it turned out, I was doing more regression testing than unit testing, because I also included my database layer and thus went to the database every time. So I implemented Unity to inject a fake database layer. But I of course want to store some data, and the main opinion was: "create an in-memory database". But what is that, and how do I implement it? The main question is: I think I have to fake the database layer, but doesn't that mean I end up creating a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests? :)

    At the end of this question I'll describe the situation on the project I just started on, and I was wondering if this is the way to go.

    Michel

    The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together. For the real database we're using the Entity Framework, and this works very simply. And now, in the 'fake' layer, I have to create all kinds of classes to load, save and persist the data. It seems weird that there is so much work in the fake layer and so little in the real layer. I hope this all makes sense :)
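
    A minimal sketch of what "in-memory database" usually means in this context (interface and entity names are placeholders, not the client's code): the fake is just the same repository interface the EF layer implements, backed by a dictionary, so no XML plumbing is needed.

        using System.Collections.Generic;

        public class Customer { public int Id; public string Name; }

        public interface ICustomerRepository
        {
            Customer Get(int id);
            void Save(Customer customer);
        }

        // the "in-memory database": nothing more than a dictionary behind the
        // same interface Unity injects in production
        public class InMemoryCustomerRepository : ICustomerRepository
        {
            private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

            public Customer Get(int id) { return _store[id]; }
            public void Save(Customer customer) { _store[customer.Id] = customer; }
        }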

  • Trying to calculate large numbers in Python with gmpy. Python keeps crashing?

    - by Ryan Peschel
    I was recommended to use gmpy to help calculate large numbers efficiently. Before, I was just using plain Python, and my script ran for a day or two and then ran out of memory (not sure how that happened, because my program's memory usage should basically be constant throughout... maybe a memory leak?). Anyway, I keep getting this weird error a couple of seconds into running my program:

        mp_allocate< 545275904->545275904 >
        Fatal Python error: mp_allocate failure

        This application has requested the Runtime to terminate it in an unusual way.
        Please contact the application's support team for more information.

    Python also crashes, and Windows 7 gives me the generic "python.exe has stopped working" dialog. This wasn't happening with standard Python integers. Now that I've switched to gmpy, I get this error just seconds into running my script. I thought gmpy specialized in large-number arithmetic? For reference, here is a sample program that produces the error:

        import gmpy2

        p = gmpy2.xmpz(3000000000)
        s = gmpy2.xmpz(2)
        M = s**p
        for x in range(p):
            s = (s * s) % M

    I have 10 gigs of RAM, and without gmpy this script ran for days without running out of memory (still not sure how that happened, considering s never really gets larger). Anyone have any ideas?

    EDIT: Forgot to mention I am using Python 3.2.
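
    A hedged aside on scale: M = 2**3000000000 needs roughly 358 MB just to store (3 billion bits), and squaring values near that size needs allocations in the same ballpark as the ~545 MB mp_allocate figure above, so the failure is plausibly just the numbers being that big. For the modular arithmetic the loop performs, gmpy2's powmod avoids materializing huge intermediates (the numbers below are small placeholders, not the failing script's values):

        # sketch: modular exponentiation without giant intermediates
        import gmpy2

        M = gmpy2.mpz(2) ** 64                 # modest modulus for illustration
        s = gmpy2.powmod(2, 10 ** 9, M)        # (2 ** 10**9) % M, no huge temporary
        print(s)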

  • Unable to create a breakpoint in Eclipse and Tomcat

    - by Traroth
    Since yesterday, I have been getting a strange error message about being unable to install breakpoints when I start my Tomcat (6.0.35) under Eclipse Juno (build 20120614-1722). Among the things I tried in order to get rid of the error:

    - Checking in Preferences - Java - Compiler that all "Classfile Generation" checkboxes were checked. I did that for both the general preferences and the project preferences.
    - Unchecking them, building, checking them, and building again (found in another question).
    - Adding org.eclipse.jdt.core.compiler.debug.lineNumber=generate to my .settings/org.eclipse.jdt.core.prefs file.
    - Using a fresh CVS checkout (same symptoms).

    Now I don't know what to do anymore. The problem is really stopping me from getting anything done; I can't work anymore. The crazy part is that the problem doesn't happen on every class, only on some of them. Neither does it happen in my other Eclipse projects. And it didn't happen before yesterday, even though I can't remember having done anything weird. Actually, I have never seen a problem like this in almost 10 years of using Eclipse... If you have any ideas, I would be really grateful.

    Edit: I also tried to ignore the message and go on with my tests. If I create another breakpoint upstream from my problematic class, then when I enter the problematic class, Eclipse tries to open a $Proxy132 class, which means it actually opens an empty page with a "source not found" message.

  • Boost Asio UDP retrieve last packet in socket buffer

    - by Alberto Toglia
    I have been messing around with Boost Asio for some days now, but I am stuck on this weird behavior. Please let me explain.

    Computer A sends continuous UDP packets every 500 ms to computer B. Computer B wants to read A's packets at its own pace, but only A's last packet, obviously the most up-to-date one. It has come to my attention that when I do:

        mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);

    I can get OLD packets that were not processed (almost every time). Does this make any sense? A friend of mine told me that sockets maintain a buffer of packets, and therefore, if I read at a lower frequency than the sender, this could happen!? So the first question is: how is it possible to receive the last packet and discard the ones I missed?

    Later, I tried the async example from the Boost documentation, but found it did not do what I wanted:

        http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html

    From what I could tell, async_receive_from should call the "handle_receive" method when a packet arrives, and that works for the first packet after the service is "run". If I want to keep listening on the port, I should call async_receive_from again in the handler code, right? BUT what I found is that I enter an infinite loop: it doesn't wait for the next packet, it just enters "handle_receive" again and again. I'm not writing a server application; a lot of things are going on (it's a game). So my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive?

    Thanks for your attention.
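
    A hedged sketch of the usual answer to the first question, reusing the names from the snippet above: the OS really does queue datagrams per socket, so to get the freshest one you drain the queue and keep only the last payload read.

        // sketch: block for one datagram, then keep reading while more are
        // already queued; afterwards mBuffer holds the newest packet
        std::size_t len = mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);
        while (mSocket.available() > 0)
        {
            len = mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);
        }
        // use mBuffer[0..len)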

  • Getting error when compiling debug mode: C++/CLI - error LNK2022

    - by Yochai Timmer
    I've got CLI code wrapping a C++ DLL. When I try to compile it in debug mode, I get the following error:

        Error 22 error LNK2022: metadata operation failed (8013118D) :
        Inconsistent layout information in duplicated types ....
        MSVCMRTD.lib (locale0_implib.obj)

    The weird thing is that in Release mode it compiles OK and works OK. The only difference I can see that causes the problem is when I change:

        Configuration Properties - C/C++ - Code Generation - Runtime Library

    When it's set to Multi-threaded Debug DLL (/MDd), it throws the error. When it's set to Multi-threaded DLL (/MD), it compiles fine. The same settings work for all the other DLLs in the project (CLI and C++), and they inherit the same properties. I'm using VS2010. So, how can I solve this? And can I get an explanation of WHY this is happening?

    Update: I've basically tried changing every option in the project's properties with no luck. I've read somewhere that this might be caused by duplicate declarations of a type with the same name. But in the CLI file I'm calling std::string etc. explicitly from std. Any other ideas?

  • why does entity framework+mysql provider enumeration returns partial results with no exceptions

    - by Freddy Rios
    I'm trying to make sense of a situation I have using Entity Framework on .NET 3.5 SP1 with MySQL 6.1.2.0 as the provider. It involves the following code:

        Response.Write("Products: " + plist.Count() + "<br />");
        var total = 0;
        foreach (var p in plist)
        {
            //... some actions
            total++;
            //... other actions
        }
        Response.Write("Total Products Checked: " + total + "<br />");

    Basically, the total products count varies on each run, and it doesn't match the full total in plist. It varies widely, from about one fifth to half. There isn't any control-flow code inside the foreach, i.e. no break, continue, try/catch, or conditions around total++ - nothing that could affect the count. As confirmation, there are other totals captured inside the loop related to the actions, and those match the lower and higher total runs.

    I can't find any reason for the above, other than something in Entity Framework or the MySQL provider that causes it to end the foreach early when retrieving an item. The body of the foreach can vary a good deal in execution time, as the actions involve file and network access. My best guess at this time is that when the .NET code takes beyond a certain threshold, there is some kind of timeout in the underlying framework/provider, and instead of raising an exception it silently reports no more items for enumeration.

    Can anyone shed some light on the above scenario and/or confirm whether the Entity Framework/MySQL provider has this behavior?

    Update: I can't reproduce the behavior by using Thread.Sleep in a simple foreach in a test project; I'm not sure where else to look for this weird behavior :(.
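
    A hedged isolation sketch that follows the poster's own timeout hypothesis: snapshotting the query up front closes the underlying data reader before the slow per-item work starts, so a mid-enumeration connection timeout can no longer truncate the loop.

        // sketch: materialize first, then do the slow work
        var products = plist.ToList();   // full retrieval happens here, quickly
        foreach (var p in products)
        {
            // ... slow file & network actions ...
        }
        // if the counts now match plist.Count(), the truncation happens while
        // enumerating the open reader, not in the query itself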

  • What do the 'size' numbers mean in the windbg !heap output?

    - by pj4533
    I see output like this in my DMP file:

        Heap entries for Segment00 in Heap 00150000
         00150640: 00640 . 00040 [01] - busy (40)
         00150680: 00040 . 01808 [01] - busy (1800)
         00151e88: 01808 . 00210 [01] - busy (208)
         00152098: 00210 . 00228 [00]
         001522c0: 00228 . 00030 [01] - busy (22)
         001522f0: 00030 . 00018 [01] - busy (10)
         00152308: 00018 . 00048 [01] - busy (3c)

    The WinDbg docs say this (the flag legend and the column headings are printed side by side):

        Heap entries for Segment00 in Heap 250000

        0x01 - HEAP_ENTRY_BUSY
        0x02 - HEAP_ENTRY_EXTRA_PRESENT
        0x04 - HEAP_ENTRY_FILL_PATTERN
        0x08 - HEAP_ENTRY_VIRTUAL_ALLOC
        0x10 - HEAP_ENTRY_LAST_ENTRY
        0x20 - HEAP_ENTRY_SETTABLE_FLAG1
        0x40 - HEAP_ENTRY_SETTABLE_FLAG2
        0x80 - HEAP_ENTRY_SETTABLE_FLAG3

        Entry      Prev    Cur
        Address    Size    Size   flags          (Bytes used)       (Tag name)
        00250000:  00000 . 00b90  [01] - busy (b90)
        00250b90:  00b90 . 00038  [01] - busy (38)
        00250bc8:  00038 . 00040  [07] - busy (24), tail fill (NTDLL!LDR Database)

    The spacing is weird in the docs, though. Does that mean 'entry address' and 'prev size' and 'cur size', or are the 'Entry', 'Prev' and 'Cur' not for the line below? What do 'prev size' and 'cur size' mean, especially with regard to 'bytes used'? What is the difference between 'bytes used' and 'cur size'?

  • PHP OOP singleton doesn't return object

    - by Misiur
    Weird trouble. I've used singletons multiple times, but this particular case just doesn't want to work. A dump says that the instance is null.

        define('ROOT', "/");
        define('INC', 'includes/');
        define('CLS', 'classes/');

        require_once(CLS.'Core/Core.class.php');

        $core = Core::getInstance();
        var_dump($core->instance);
        $core->settings(INC.'config.php');
        $core->go();

    The Core class:

        class Core
        {
            static $instance;
            public $db;
            public $created = false;

            private function __construct()
            {
                $this->created = true;
            }

            static function getInstance()
            {
                if (!self::$instance) {
                    self::$instance = new Core();
                } else {
                    return self::$instance;
                }
            }

            public function settings($path = null) { ... }
            public function go() { ... }
        }

    The error:

        Fatal error: Call to a member function settings() on a non-object in path

    It's possibly some stupid typo, but I don't see any errors in my editor. Thanks for the fast responses, as always.
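
    For comparison, a minimal sketch of the usual shape of this accessor: the return sits after the branch, so the very first call (the one that constructs the instance) returns it too, instead of implicitly returning null.

        // sketch: always return the instance, whether or not it was just created
        static function getInstance()
        {
            if (!self::$instance) {
                self::$instance = new Core();
            }
            return self::$instance;
        }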

  • Why is my Repeater null in code behind?

    - by Rob Stevenson-Leggett
    I'm just starting a new project, and I'm seeing some really weird behaviour. ASP.NET 3.5, VS2008. I've tried rebuilding, closing VS, deleting everything and getting it from SVN again, but I cannot understand why the repeater in the following is null on Page_Load. I know this is going to be a headslapping moment. Help me out?

    Markup:

        <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="GalleryControl.ascx.cs"
            Inherits="Site.UserControls.GalleryControl" %>
        <asp:Repeater ID="rptGalleries" runat="server">
            <HeaderTemplate><ul></HeaderTemplate>
            <ItemTemplate>
                <li>wqe</li>
            </ItemTemplate>
            <FooterTemplate></ul></FooterTemplate>
        </asp:Repeater>

    Code-behind:

        public partial class GalleryControl : System.Web.UI.UserControl
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                rptGalleries.DataSource = new[] {1, 2, 3, 4, 5};
                rptGalleries.DataBind();
            }
        }

    Why is my repeater null? What the F is going on?

  • JSF, writeAttribute("value", str, null) fails with strings obtained through ValueExpression.getValue

    - by Roma
    Hello, I'm having a somewhat weird problem with a custom JSF component. Here's my renderer code:

        public class Test extends Renderer {
            public void encodeBegin(final FacesContext context, final UIComponent component) throws IOException {
                ResponseWriter writer = context.getResponseWriter();
                writer.startElement("textarea", component);
                String clientId = component.getClientId(context);
                if (clientId != null)
                    writer.writeAttribute("name", clientId, null);
                ValueExpression exp = component.getValueExpression("value");
                if (exp != null && exp.getValue(context.getELContext()) != null) {
                    String val = (String) exp.getValue(context.getELContext());
                    System.out.println("Value: " + val);
                    writer.writeAttribute("value", val, null);
                }
                writer.endElement("textarea");
                writer.flush();
            }
        }

    The code generates:

        <textarea name="j_id2:j_id12:j_id19" value=""></textarea>

    The property binding contains "test", and as it should be, "Value: test" is successfully printed to the console. Now, if I change the code to:

        public void encodeBegin(final FacesContext context, final UIComponent component) throws IOException {
            ResponseWriter writer = context.getResponseWriter();
            writer.startElement("textarea", component);
            String clientId = component.getClientId(context);
            if (clientId != null)
                writer.writeAttribute("name", clientId, null);
            ValueExpression exp = component.getValueExpression("value");
            if (exp != null && exp.getValue(context.getELContext()) != null) {
                String val = (String) exp.getValue(context.getELContext());
                String str = "new string";
                System.out.println("Value1: " + val + ", Value2: " + str);
                writer.writeAttribute("value", str, null);
            }
            writer.endElement("textarea");
            writer.flush();
        }

    the generated HTML is:

        <textarea name="j_id2:j_id12:j_id19" value="new string"></textarea>

    and the console output is "Value1: test, Value2: new string".

    What's going on here? This doesn't make sense. Why would writeAttribute differentiate between the two strings?

    Additional info:
    - The component extends UIComponentBase
    - I am using it with Facelets

  • SqlServer slow on production environment

    - by Lieven Cardoen
    I have a weird problem in a production environment at a customer. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server, and the data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server).

    Now, there's this query that, when we run it, gives these durations (first we clear the cache):

        300ms, 20ms, 15ms, 17ms, ...

    The first time it takes longer, but from then on it is cached. At the customer, on a SQL Server that is more powerful, these are the durations (I didn't have the rights to clear the cache; will try this tomorrow):

        2500ms, 2600ms, 2400ms, ...

    The query can be improved, that's right, but that's not the question here. How would you tackle this? I don't know where to go from here. The servers at this customer are really more powerful, but they do have virtual servers (we don't). What could be the cause? Not enough memory? Fragmentation? Physical storage? I know it can be a lot of things, but maybe some of you have info for me on how to go on with this issue...
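
    For reference, a sketch of the cache-clearing step mentioned above, so the cold-cache timing can be reproduced on the customer's box once the rights are there (requires sysadmin; both commands affect the whole instance, so avoid running them casually in production):

        DBCC FREEPROCCACHE;      -- throw away compiled plans (forces recompilation)
        DBCC DROPCLEANBUFFERS;   -- throw away cached data pages (forces cold I/O)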

  • ASPX page renders differently when reached on intranet vs. internet?

    - by MattSlay
    This is so odd to me. I have IIS 5 running on XP, hosting a small ASP.NET app for our LAN that we can access using the computer name, virtual directory and page name (http://matt/smallapp/customers.aspx). You can also hit that IIS server and page from the internet, because I have a public IP that my firewall routes to the "Matt" computer (like http://213.202.3.88/smallapp/customers.aspx [just a made-up IP]). Don't worry, Windows domain authentication is in place to protect the app from anonymous users. So all of the above works fine.

    What's weird is that the borders of the divs on the page are rendered much thicker when you access the page from the intranet versus the internet (I'm using IE8), and some of the div layout (stretching and such) also behaves differently. Why would it render differently in the same browser based on whether it was reached from the LAN versus the internet? It does NOT do this in Firefox, so it must be an IE8 thing. All the CSS for the divs is right in the HTML page, so I do not think it is a caching matter of a CSS file.

    Notice how the borders are different in these two images:

    Internet: http://twitpic.com/hxx91
    LAN: http://twitpic.com/hxxtv
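
    One documented IE8 mechanism that depends on the URL rather than the page: sites in the Intranet security zone (plain machine names like http://matt/) are rendered in Compatibility View by default, while the same page reached by public IP gets standards mode. A sketch of the per-page opt-out, worth trying as a first test (place it early in the head):

        <!-- ask IE8 to use its newest engine regardless of zone-based Compatibility View -->
        <meta http-equiv="X-UA-Compatible" content="IE=edge" />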

  • Windows 8 Data Binding Bug - OnPropertyChanged Updates Wrong Object

    - by Andrew
    I'm experiencing some really weird behavior with data binding in Windows 8. I have a ComboBox set up like this:

        <ComboBox VerticalAlignment="Center" Margin="0,18,0,0" HorizontalAlignment="Right"
                  Height="Auto" Width="138"
                  Background="{StaticResource DarkBackgroundBrush}" BorderThickness="0"
                  ItemsSource="{Binding CurrentForum.SortValues}"
                  SelectedItem="{Binding CurrentForum.CurrentSort, Mode=TwoWay}">
            <ComboBox.ItemTemplate>
                <DataTemplate>
                    <TextBlock HorizontalAlignment="Right"
                               Text="{Binding Converter={StaticResource SortValueConverter}}"/>
                </DataTemplate>
            </ComboBox.ItemTemplate>
        </ComboBox>

    inside a page whose DataContext is set to a statically located ViewModel. That ViewModel's CurrentForm property is implemented like this...

        public FormViewModel CurrentForm
        {
            get { return _currentForm; }
            set
            {
                _currentForm = value;
                if (!_currentForm.IsLoaded)
                {
                    _currentSubreddit.Refresh.Execute(null);
                }
                RaisePropertyChanged("CurrentForm");
            }
        }

    ... and when I change it, something really strange happens: the previous FormViewModel's CurrentSort property is changed to the new FormViewModel's current sort value. This happens as RaisePropertyChanged is called; through a managed-to-native transition, native code invokes the setter of CurrentSort on the previous FormViewModel. Does that sound like a bug in Windows 8's data binding? Am I doing something wrong?

  • Installing Rails on Mountain Lion

    - by Jordan Medlock
    I was wondering if you could help me find out why I cannot install Ruby on Rails on my MBP with OS X Mountain Lion. It's a weird problem, and I'll give you as much info as I can. I've installed Ruby, and it's working at version 1.9.3. I've installed RubyGems, and it's worked for every other gem I've tried to install; its version is 1.8.24. When I run:

        $ sudo gem install rails

    it replies with the message:

        Successfully installed rails-3.2.8
        1 gem installed

    Although when I ask it rails -v, it returns:

        Rails is not currently installed on this system. To get the latest version, simply type:

            $ sudo gem install rails

        You can then rerun your "rails" command.

    What should I do? The Rails stub (/usr/bin/rails) contains:

        #!/usr/bin/ruby
        # Stub rails command to load rails from Gems or print an error if not installed.
        require 'rubygems'

        version = ">= 0"

        if ARGV.first =~ /^_(.*)_$/ and Gem::Version.correct? $1 then
          version = $1
          ARGV.shift
        end

        begin
          gem 'railties', version or raise
        rescue Exception
          puts 'Rails is not currently installed on this system. To get the latest version, simply type:'
          puts
          puts '    $ sudo gem install rails'
          puts
          puts 'You can then rerun your "rails" command.'
          exit 0
        end

        load Gem.bin_path('railties', 'rails', version)

    That must mean that the gem files aren't there, or are old or corrupted. How can I check that?
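
    A few hedged checks that follow directly from the stub's logic: it prints that message whenever activating the railties gem fails, which usually comes down to RubyGems looking in a different gem path than the one sudo installed into, or a second Ruby shadowing /usr/bin/ruby.

        gem list railties        # is railties actually visible to this RubyGems?
        gem env                  # compare GEM HOME / GEM PATHS with: sudo gem env
        which -a ruby gem rails  # is another Ruby (rvm/rbenv/Homebrew) shadowing /usr/bin?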
