Search Results

Search found 10429 results on 418 pages for 'high resolution'.

Page 53 of 418

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that will potentially involve an extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably on the higher end of intermediate at Objective-C, so I am looking for some solid articles and advice on the following: 1) responsible management of UIImage and UIImageView in the name of conserving physical RAM; 2) the merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain; 3) anything dealing with memory paging, particularly as it pertains to images. I will epilogue by saying that the numbers above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself as opposed to being loaded from an external server.

    Read the article

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics that show which sections of videos are being watched (at the moment, we just register a view when a video starts playing). For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section. My first thought was to collect up the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse). So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable, so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to the database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats. So, to the question - what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
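    The time-span idea described above can be kept quite small on the server side. Below is a minimal, hedged sketch in C# of one way it might look: heartbeats carrying a session id and the current playhead position are merged into spans in memory, and sessions that go quiet past a timeout are flushed to storage as one row per span. All names (ViewingSession, FlushInactive, the 10-second scrub threshold, the 60-second timeout) are illustrative assumptions, not part of the original player or backend.

    using System;
    using System.Collections.Generic;

    // Sketch: aggregate per-session heartbeats into watched time spans and
    // flush a session once it has been inactive longer than a timeout.
    class ViewingSession
    {
        public readonly List<Tuple<double, double>> Spans = new List<Tuple<double, double>>();
        public DateTime LastSeen { get; private set; }
        double spanStart = -1, lastPosition = -1;

        // Called for every heartbeat; position is the playhead in seconds.
        public void Heartbeat(double position, DateTime now)
        {
            LastSeen = now;
            // A jump (scrub) forward or backward closes the current span.
            if (spanStart < 0 || position < lastPosition || position - lastPosition > 10)
            {
                CloseSpan();
                spanStart = position;
            }
            lastPosition = position;
        }

        public void CloseSpan()
        {
            if (spanStart >= 0 && lastPosition > spanStart)
                Spans.Add(Tuple.Create(spanStart, lastPosition));
            spanStart = -1;
        }
    }

    class ViewingTracker
    {
        static readonly TimeSpan InactivityTimeout = TimeSpan.FromSeconds(60);
        readonly Dictionary<string, ViewingSession> sessions = new Dictionary<string, ViewingSession>();

        public void OnHeartbeat(string sessionId, double position)
        {
            ViewingSession s;
            if (!sessions.TryGetValue(sessionId, out s))
                sessions[sessionId] = s = new ViewingSession();
            s.Heartbeat(position, DateTime.UtcNow);
        }

        // Run periodically: persist sessions that have gone quiet, one row per span.
        public void FlushInactive(Action<string, double, double> writeSpan)
        {
            var expired = new List<string>();
            foreach (var kv in sessions)
                if (DateTime.UtcNow - kv.Value.LastSeen > InactivityTimeout)
                    expired.Add(kv.Key);
            foreach (var id in expired)
            {
                sessions[id].CloseSpan();
                foreach (var span in sessions[id].Spans)
                    writeSpan(id, span.Item1, span.Item2);
                sessions.Remove(id);
            }
        }
    }

    Stored this way, a two-minute view becomes a handful of span rows instead of hundreds of heartbeat records, which is what keeps the write volume manageable.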

    Read the article

  • High memory usage for dummies

    - by zaf
    I've just restarted my Firefox web browser again because it started stuttering and slowing down. This happens every other day due to (as I understand it) excessive memory usage. I've noticed it takes 40M when it starts and then, by the time I notice the slowdown, it goes to 1G and my machine has nothing more to offer unless I close other applications. I'm trying to understand the technical reasons behind why it's such a difficult problem to solve. Mozilla has a page about high memory usage: http://support.mozilla.com/en-US/kb/High+memory+usage But I'm looking for a slightly more in-depth and satisfying explanation. Not super technical, but enough to give the issue more respect and please the crowd here. Some questions I'm already pondering (they could be silly, so take it easy): When I close all tabs, why doesn't the memory usage go all the way down? Why are there no limits on extension/theme/plugin memory usage? Why does the memory usage increase if it's left open for long periods of time? Why are memory leaks so difficult to find and fix? App- and language-agnostic answers also much appreciated.

    Read the article

  • Best way to display "High Scores" results

    - by George
    First, I would like to thank everyone for all the help they provide via this website. It has gotten me to the point of almost being able to release my first iPhone app! Okay, so the last part I have is this: I have a game that allows users to save their high scores. I update a plist file which contains the user's name, level, and score. Now I want to create a screen that will display the top 20 high scores. What would be the best way to do this? At first I thought of possibly creating an HTML file with this info, but am not even sure if that is possible. I would need to read the plist file and then write it out as HTML. Is this possible? To write a file out as HTML? Or an even better question: is there a better way? Thanks in advance for any and all help! Geo...

    Read the article

  • Moving a high-quality line on a panel in C#

    - by user1787601
    I want to draw a line on a panel and then move it as the mouse moves. To do so, I draw the line, and when the mouse moves I redraw the line at the new location and remove the previous line by drawing a line in the background color over it. It works fine if I do not use the high-quality smoothing mode, but if I use the high-quality smoothing mode, it leaves traces on the panel. Does anybody know how to fix this? Thank you. Here is the code:

    int x_previous = 0;
    int y_previous = 0;
    private void panel1_MouseMove(object sender, MouseEventArgs e)
    {
        Pen pen1 = new System.Drawing.Pen(Color.Black, 3);
        Pen pen2 = new System.Drawing.Pen(panel1.BackColor, 3);
        Graphics g = panel1.CreateGraphics();
        g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
        g.DrawLine(pen2, new Point(0, 0), new Point(x_previous, y_previous));
        g.DrawLine(pen1, new Point(0, 0), new Point(e.Location.X, e.Location.Y));
        x_previous = e.Location.X;
        y_previous = e.Location.Y;
    }

    Here is the snapshot with SmoothingMode, and here is the snapshot without SmoothingMode (the screenshots are not reproduced here).
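    A common way to avoid the ghosting (speaking generally, not from the article itself): stop erasing with background-colored lines and let the control repaint itself instead - store the current end point, call Invalidate(), and draw the single line in the Paint event, where the background has already been cleared before drawing. A minimal sketch of that approach, with a small Panel subclass only to enable double buffering:

    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Windows.Forms;

    // Double buffering avoids flicker while the line follows the mouse.
    class BufferedPanel : Panel
    {
        public BufferedPanel() { DoubleBuffered = true; }
    }

    public class LineForm : Form
    {
        readonly BufferedPanel panel1 = new BufferedPanel { Dock = DockStyle.Fill };
        Point lineEnd = Point.Empty;

        public LineForm()
        {
            Controls.Add(panel1);
            panel1.MouseMove += (s, e) =>
            {
                lineEnd = e.Location;   // remember the new end point
                panel1.Invalidate();    // request a repaint; nothing is erased by hand
            };
            panel1.Paint += (s, e) =>
            {
                e.Graphics.SmoothingMode = SmoothingMode.HighQuality;
                using (var pen = new Pen(Color.Black, 3))
                    e.Graphics.DrawLine(pen, Point.Empty, lineEnd);
            };
        }
    }

    Because the panel surface is cleared on every paint, the anti-aliased edges of the previous line never need to be painted over, which is what leaves the traces in the original approach.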

    Read the article

  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements: 1. High availability. 2. Load balancing.
    First configuration: 1. Two Linux servers have been configured with one static IP each: 10.17.243.11 and 10.17.243.12. 2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as the VIP, 10.17.243.11 as master and 10.17.243.12 as backup). 3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running. As soon as it goes down, the VIP is assigned to the backup server (10.17.243.12). 4. The problem here is that all communication goes to the master server.
    Second configuration: 1. I found an active-active configuration for Keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1, and real 10.17.243.12 and virtual 10.17.243.20 for server #2). 2. Everything works fine. We have two VIPs which are accessible (HA), but all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they are not acceptable in our case.
    Question: Is there any way to have one virtual IP which is assigned to both servers? By that I mean both servers handling some part of the workload (like what we do in web server load balancing), using either Keepalived or some other tool? Thanks in advance.

    Read the article

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant, and required. I need to be able to perform periodic maintenance without a maintenance window - say an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET). I've been thinking along the lines of using the warm spare (or a 3rd server) to do the maintenance, and then "swap" it into production:
    1. Synchronize the spare by restoring backups, including a current transaction log.
    2. Perform the maintenance tasks.
    3. Reconfigure clients to connect to the spare server. Existing connections are finished within a minute or so.
    4. The spare server is now the production server.
    The problem remaining is that the new production server is now out of date by however long it took to perform maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?

    Read the article

  • Keepalived for more than 20 virtual addresses

    - by cvaldemar
    I have set up keepalived on two Debian machines for high availability, but I've run into the maximum number of virtual IPs I can assign to my vrrp_instance. How would I go about configuring and failing over 20+ virtual IPs? This is the (very simple) setup:
    LB01: 10.200.85.1
    LB02: 10.200.85.2
    Virtual IPs: 10.200.85.100 - 10.200.85.200
    Each machine is also running Apache (later Nginx) binding on the virtual IPs for SSL client certificate termination and proxying to backend webservers. The reason I need so many VIPs is the inability to use VirtualHost on HTTPS. This is my keepalived.conf:
    vrrp_script chk_apache2 {
        script "killall -0 apache2"
        interval 2
        weight 2
    }
    vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 51
        priority 101
        virtual_ipaddress {
            10.200.85.100
            .
            . all the way to .
            10.200.85.200
        }
    }
    An identical configuration is on the BACKUP machine, and it's working fine, but only up to the 20th IP. I have found a HOWTO discussing this problem. Basically, they suggest having just one VIP and routing all traffic "via" this one IP, and "all will be well". Is this a good approach? I'm running pfSense firewalls in front of the machines. Quote from the above link:
    ip route add $VNET/N via $VIP
    or
    route add $VNET netmask w.x.y.z gw $VIP
    Thanks in advance.
    EDIT: @David Schwartz said it would make sense to add a route, so I tried adding a static route to the pfSense firewall, but that didn't work as I expected it would. pfSense route: Interface: LAN; Destination network: 10.200.85.200/32 (virtual IP); Gateway: 10.200.85.100 (floating virtual IP); Description: Route to VIP .100. I also made sure I had packet forwarding enabled on my hosts:
    $ cat /etc/sysctl.conf
    net.ipv4.ip_forward=1
    net.ipv4.ip_nonlocal_bind=1
    Am I doing this wrong? I also removed all VIPs from the keepalived.conf so it only fails over 10.200.85.100.

    Read the article

  • 100% CPU load on Ubuntu 10.04.3 LTS 64bit

    - by deadtired
    I have spent 2 days trying to fix this issue, with no success. The server is a MySQL database server. Hardware: Dell PowerEdge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33GHz, 16 GB RAM, 2x 146 GB SAS (software RAID 1). Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41. Issue: while MySQL is not in use and runs with no database, everything seems alright. As soon as I install a database, something brings all 8 cores to 100% with low memory consumption. So, as you can imagine, the load average goes high (I saw a 212 load average for the first time). The server doesn't become unresponsive, but you can see it's slow while browsing the project installed. Additional info: the database used is not more than 24MB and it was moved from a server with fewer resources and a lot more (and larger) databases, so it's not the database/project. my.cnf is not the reason either, as I used both the default one and the one I use on the same distribution on another server. What is interesting is that MySQL doesn't close any processes and runs to the limit of max_connections. Logs are quiet - nothing there. I switched to this Ubuntu version after I suspected some problems on the newly installed Ubuntu 11.10 server. That one worked alright for an hour after I made a kernel upgrade to 3.0.1 (it was using the memory also). I tested disk speed and it seems alright. Some more output on the running server: dstat -cndymlp -N total -D total 3 and htop (output not reproduced here). Any ideas? Did anyone meet the same problem? Any fix you can think of?

    Read the article

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance. Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355. Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side. Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems. Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
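    One commonly suggested direction for this kind of load (offered here as a sketch, not as the article's own recommendation) is to replace the thread-per-client blocking Receive loop with asynchronous receives, so a small pool of I/O completion threads services all client sockets instead of 60-70 dedicated threads competing for the scheduler. On .NET 2.0 SP1 and later this is usually done with SocketAsyncEventArgs. A minimal accept/receive skeleton, with message framing, buffer pooling, and error handling omitted:

    using System;
    using System.Net;
    using System.Net.Sockets;

    // Sketch of an async accept/receive loop using SocketAsyncEventArgs.
    class AsyncServer
    {
        readonly Socket listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        public void Start(int port)
        {
            listener.Bind(new IPEndPoint(IPAddress.Any, port));
            listener.Listen(100);
            var acceptArgs = new SocketAsyncEventArgs();
            acceptArgs.Completed += OnAccept;
            if (!listener.AcceptAsync(acceptArgs))
                OnAccept(listener, acceptArgs);      // completed synchronously
        }

        void OnAccept(object sender, SocketAsyncEventArgs e)
        {
            Socket client = e.AcceptSocket;
            var recvArgs = new SocketAsyncEventArgs();
            recvArgs.SetBuffer(new byte[8192], 0, 8192);
            recvArgs.UserToken = client;
            recvArgs.Completed += OnReceive;
            if (!client.ReceiveAsync(recvArgs))
                OnReceive(client, recvArgs);

            e.AcceptSocket = null;                   // reuse the args for the next accept
            if (!listener.AcceptAsync(e))
                OnAccept(listener, e);
        }

        void OnReceive(object sender, SocketAsyncEventArgs e)
        {
            var client = (Socket)e.UserToken;
            if (e.BytesTransferred == 0 || e.SocketError != SocketError.Success)
            {
                client.Close();
                return;
            }
            // Parse e.Buffer[e.Offset .. e.Offset + e.BytesTransferred) into messages
            // and hand them to the existing processing queues here.
            if (!client.ReceiveAsync(e))
                OnReceive(client, e);
        }
    }

    In a production version the SocketAsyncEventArgs instances and their buffers would come from a reusable pool so the steady-state receive path allocates nothing, which helps keep GC pauses out of the send/receive path.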

    Read the article

  • Sporadic name resolution failure happening on web service call

    - by ansleygal
    One of our WCF service applications calls a separate third-party web service to submit information. We are getting the following error every so often, but not all the time:
    System.Net.WebException: The remote name could not be resolved: 'ws.examplesite.net'
        at System.Net.HttpWebRequest.GetRequestStream()
        at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    The weird thing is that after the error happens, we can hit "Submit" again a second later and it will go through just fine. We have checked and double-checked with our network guys and they have confirmed that DNS is correct, and they have done multiple nslookups in a row to confirm. This is happening in all environments (dev, test and prod). We use the third party's test and prod URLs, and it is happening when we point to both. Does anyone have any other troubleshooting techniques for this, or any reason this would happen? Much thanks, ~Ansley

    Read the article

  • CouchDB conflict resolution

    - by Sundar
    How does CouchDB handle conflicts while doing bi-directional replication? For example: let's say there are two address book databases (on server A and server B). There is a document for Jack which contains Jack's contact details. Servers A and B are replicated and both have the same version of the Jack document. On server A, Jack's mobile number is updated. On server B, Jack's address is updated. Now when we do bi-directional replication there is a conflict. How does CouchDB handle it? If we initiate replication in a Java program, is there a way to know from the Java program whether there were any conflicts?
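    For background (general CouchDB behaviour, not specific to this setup): after replication CouchDB keeps both revisions, deterministically picks the same winning revision on every replica, and lists the losing revisions in the document's _conflicts field, which any client can request over HTTP with ?conflicts=true. A Java program could issue exactly the same request after replication finishes; the sketch below just uses C# and a hypothetical database/document name for illustration.

    using System;
    using System.Net;

    // Illustrative only: ask CouchDB to include conflicting revisions for a document.
    // The host, database name ("addressbook") and document id ("jack") are hypothetical.
    class CouchConflictCheck
    {
        static void Main()
        {
            string url = "http://localhost:5984/addressbook/jack?conflicts=true";
            using (var client = new WebClient())
            {
                string json = client.DownloadString(url);
                // When unresolved conflicts exist, the response contains a
                // "_conflicts": ["<losing-rev>", ...] array next to the winning revision;
                // resolving means writing the merged data and deleting the losing revisions.
                Console.WriteLine(json);
            }
        }
    }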

    Read the article

  • Any high-profile open source finance projects?

    - by Gayle
    Is there a high profile open source project in the finance industry - specifically the investment banking area - that I could contribute to (ideally .NET)? I'd like to beef up my resume in this field. I would prefer something in the algorithmic trading field, but am open to any route (e.g. front-office applications, etc).

    Read the article

  • High performance web (-services) applications

    - by User Friendly
    Hi, I'd like to become a guru in high-performance web and web-services applications. What technologies/patterns/skills do you recommend looking at? Basically, I have good skills in ASP.NET/.NET-based web development, but I'd like to know how big things are built (on any platform, not just the .NET technology stack). Thank you.

    Read the article

  • Subclipse > Accidental Merge Conflict Resolution

    - by DTS
    I'm trying to merge changes from one branch into another using Subclipse. On a particular file in a particular subdirectory, I had a file conflict and edited the conflicts via the context menu option for this. However, when I went to resolve the conflict I apparently chose the wrong option and was left with the original unmerged file in my branch. Since then, I can no longer get this file back into a conflicted state so I can resolve this issue properly. I've tried deleting the file and the directory that contains it, to no avail. Any ideas?

    Read the article

  • A Linker Resolution Problem in a C++ Program

    - by Vlad
    We have two source files, a.cpp and b.cpp, and a header file named constructions.h. We define a simple C++ class with the same name (class M, for instance) in each source file. The file a.cpp looks like this:

    #include <iostream>
    #include "constructions.h"

    class M {
        int i;
    public:
        M() : i( -1 ) { cout << "M() from a.cpp" << endl; }
        M( int a ) : i( a ) { cout << "M(int) from a.cpp, i: " << i << endl; }
        M( const M& b ) { i = b.i; cout << "M(M&) from a.cpp, i: " << i << endl; }
        M& operator = ( M& b ) { i = b.i; cout << "M::operator =(), i: " << i << endl; return *this; }
        virtual ~M() { cout << "M::~M() from a.cpp" << endl; }
        operator int() { cout << "M::operator int() from a.cpp" << endl; return i; }
    };

    void test1()
    {
        cout << endl << "Example 1" << endl;
        M b1;
        cout << "b1: " << b1 << endl;

        cout << endl << "Example 2" << endl;
        M b2 = 5;
        cout << "b2: " << b2 << endl;

        cout << endl << "Example 3" << endl;
        M b3(6);
        cout << "b3: " << b3 << endl;

        cout << endl << "Example 4" << endl;
        M b4 = b1;
        cout << "b4: " << b4 << endl;

        cout << endl << "Example 5" << endl;
        M b5;
        b5 = b2;
        cout << "b5: " << b5 << endl;
    }

    int main(int argc, char* argv[])
    {
        test1();
        test2();
        cin.get();
        return 0;
    }

    The file b.cpp looks like this:

    #include <iostream>
    #include "constructions.h"

    class M {
    public:
        M() { cout << "M() from b.cpp" << endl; }
        ~M() { cout << "M::~M() from b.cpp" << endl; }
    };

    void test2()
    {
        M m;
    }

    Finally, the file constructions.h contains only the declaration of the function test2() (which is defined in b.cpp), so that it can be called from a.cpp:

    using namespace std;
    void test2();

    We compiled and linked these three files using either VS2005 or the GNU 4.1.0 compiler and the 2.16.91 ld linker under Suse. The results are surprising and differ between the two build environments, but in both cases it looks like the linker gets confused about which definition of the class M it should use.

    If we comment out the definition of test2() from b.cpp and its invocation from a.cpp, then all the C++ objects created in test1() are of the type M defined in a.cpp and the program executes normally under Windows and Suse. Here is the run output under Windows:

    Example 1
    M() from a.cpp
    M::operator int() from a.cpp
    b1: -1

    Example 2
    M(int) from a.cpp, i: 5
    M::operator int() from a.cpp
    b2: 5

    Example 3
    M(int) from a.cpp, i: 6
    M::operator int() from a.cpp
    b3: 6

    Example 4
    M(M&) from a.cpp, i: -1
    M::operator int() from a.cpp
    b4: -1

    Example 5
    M() from a.cpp
    M::operator =(), i: 5
    M::operator int() from a.cpp
    b5: 5
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp

    If we enable the definition of test2() in b.cpp but comment out its invocation from main(), then the results are different. Under Suse, the C++ objects created in test1() are still of the type M defined in a.cpp and the program still seems to execute normally. The VS2005 versions behave differently in Debug and Release mode: in Debug mode, the program still seems to execute normally, but in Release mode, b1 and b5 are of the type M defined in b.cpp (as the constructor invocation proves), although the other member functions called (including the destructor) belong to the M defined in a.cpp. Here is the run output for the executable built in Release mode:

    Example 1
    M() from b.cpp
    M::operator int() from a.cpp
    b1: 4206872

    Example 2
    M(int) from a.cpp, i: 5
    M::operator int() from a.cpp
    b2: 5

    Example 3
    M(int) from a.cpp, i: 6
    M::operator int() from a.cpp
    b3: 6

    Example 4
    M(M&) from a.cpp, i: 4206872
    M::operator int() from a.cpp
    b4: 4206872

    Example 5
    M() from b.cpp
    M::operator =(), i: 5
    M::operator int() from a.cpp
    b5: 5
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp

    Finally, if we allow the call to test2() from main(), the program misbehaves in all circumstances (that is, under Suse and under Windows in both Debug and Release modes). The Windows Debug version finds a memory corruption around the variable m defined in test2(). Here is the Windows output in Release mode (test2() seems to have created an instance of the M defined in b.cpp):

    Example 1
    M() from b.cpp
    M::operator int() from a.cpp
    b1: 4206872

    Example 2
    M(int) from a.cpp, i: 5
    M::operator int() from a.cpp
    b2: 5

    Example 3
    M(int) from a.cpp, i: 6
    M::operator int() from a.cpp
    b3: 6

    Example 4
    M(M&) from a.cpp, i: 4206872
    M::operator int() from a.cpp
    b4: 4206872

    Example 5
    M() from b.cpp
    M::operator =(), i: 5
    M::operator int() from a.cpp
    b5: 5
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M() from b.cpp
    M::~M() from b.cpp

    And here is the Suse output. The objects created in test1() are of the type M defined in a.cpp, but the object created in test2() is also of the type M defined in a.cpp, unlike the object created under Windows, which is of the type M defined in b.cpp. The program crashed at the end:

    Example 1
    M() from a.cpp
    M::operator int() from a.cpp
    b1: -1

    Example 2
    M(int) from a.cpp, i: 5
    M::operator int() from a.cpp
    b2: 5

    Example 3
    M(int) from a.cpp, i: 6
    M::operator int() from a.cpp
    b3: 6

    Example 4
    M(M&) from a.cpp, i: -1
    M::operator int() from a.cpp
    b4: -1

    Example 5
    M() from a.cpp
    M::operator =(), i: 5
    M::operator int() from a.cpp
    b5: 5
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M() from a.cpp
    M::~M() from a.cpp
    Segmentation fault (core dumped)

    The code can be copied verbatim and tried; it is purely scholastic. The statement cin.get() at the end of main() was included just to keep the output window open when the program is run directly from VS2005, so that we could analyze the output. We are looking for a software engineer in Sunnyvale, CA and may offer that position to the programmer capable of providing an intelligent and comprehensive explanation of these anomalies. I can be contacted at [email protected].

    Read the article

  • IWshShortcut Target Resolution in Windows 7

    - by Dan Walker
    I've got some code to read shortcuts using the Windows Script Host, but it appears to have a problem in Windows 7. When reading shortcuts, if there is an environment variable in the target path, it resolves to the wrong drive. For example, the shortcut to Notepad resolves to D:\Windows\system32\notepad.exe instead of C:\Windows\system32\notepad.exe. The problem is not with my computer's settings, because the shortcut works just fine, and when looking at the value for %SystemRoot%, it shows C:\Windows. Any ideas as to what could be the problem, or alternatively, what a different method of reading shortcuts would be? Thanks, Dan

    Read the article

  • Compressing High Resolution Satellite Images

    - by Monika
    Hi! Please advise on the best way to compress a satellite image. Details:
    Uncompressed size: 60 GB
    Uncompressed format: IMG, 4 bands (to be retained after compression)
    Preferred compression format: JPEG2000
    Lossy enough to aid in visual analysis.
    Thanks, Monika

    Read the article

  • SEO Canonical Issue resolution on IIS

    - by kacalapy
    I have a site running on IIS that I have a canonical issue with. The error is: The page with URL "http://www.site.org/images/join_forum.gif" can also be accessed by using URL "https://www.site.org/images/join_forum.gif". Search engines identify unique pages by using URLs. When a single page can be accessed by using any one of multiple URLs, a search engine assumes that there are multiple unique pages. Use a single URL to reference a page to prevent dilution of page relevance. You can prevent dilution by following a standard URL format. How can I resolve this?

    Read the article

  • Delphi/Pascal training in high school/college/university

    - by Bruce McGee
    Are Delphi/Pascal being taught in any high schools/colleges/universities, particularly in Canada and the US? I was surprised how many schools in the UK are teaching Delphi. Their largest exam board is even dropping PHP/C#/C in 2011 and encouraging Delphi. I also remember that CodeGear was going to provide development tool licenses to Russian schools a couple of years ago. I'd like to know if it's being taught closer to (my) home.

    Read the article

  • How to cascade dependency resolution w/ CDI (WELD)

    - by mP
    I would like to have a central Weld container that holds all my services and so on. This container would, however, be wrapped by a second container which contains local settings. The goal is that if a dependency cannot be found in the outer container, the inner container is then searched. How can I achieve this? I would prefer to do things in a standard manner, without resorting to the use of non-standard WELD extensions.

    Read the article

  • Resolution Problem with HttpRequestScoped in Autofac

    - by Page Brooks
    I'm trying to resolve the AccountController in my application, but it seems that I have a lifetime scoping issue.

    builder.Register(c => new MyDataContext(connectionString)).As<IDatabase>().HttpRequestScoped();
    builder.Register(c => new UnitOfWork(c.Resolve<IDatabase>())).As<IUnitOfWork>().HttpRequestScoped();
    builder.Register(c => new AccountService(c.Resolve<IDatabase>())).As<IAccountService>().InstancePerLifetimeScope();
    builder.Register(c => new AccountController(c.Resolve<IAccountService>())).InstancePerDependency();

    I need MyDataContext and UnitOfWork to be scoped at the HTTP request level. When I try to resolve the AccountController, I get the following error:

    No scope matching the expression 'value(Autofac.Builder.RegistrationBuilder`3+<c__DisplayClass0[...]).lifetimeScopeTag.Equals(scope.Tag)' is visible from the scope in which the instance was requested.

    Do I have my dependency lifetimes set up incorrectly?
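    For what it's worth (a general note about Autofac lifetime scopes, not a confirmed diagnosis of this exact setup): this error is typically raised when a component registered for a tagged scope, such as an HttpRequestScoped one, is resolved from a scope that does not carry that tag, for example straight from the root container instead of from the per-request lifetime scope the web integration creates. A rough sketch of the underlying mechanism, with simplified stand-in types and an illustrative "httpRequest" tag:

    using Autofac;

    interface IDatabase { }
    interface IUnitOfWork { }
    interface IAccountService { }
    class MyDataContext : IDatabase { }
    class UnitOfWork : IUnitOfWork { public UnitOfWork(IDatabase db) { } }
    class AccountService : IAccountService { public AccountService(IDatabase db) { } }
    class AccountController { public AccountController(IAccountService svc) { } }

    class LifetimeScopeDemo
    {
        static void Main()
        {
            var builder = new ContainerBuilder();
            // Request-scoped components are tied to a lifetime scope carrying this tag.
            builder.RegisterType<MyDataContext>().As<IDatabase>()
                   .InstancePerMatchingLifetimeScope("httpRequest");
            builder.RegisterType<UnitOfWork>().As<IUnitOfWork>()
                   .InstancePerMatchingLifetimeScope("httpRequest");
            builder.RegisterType<AccountService>().As<IAccountService>()
                   .InstancePerLifetimeScope();
            builder.RegisterType<AccountController>();

            using (var container = builder.Build())
            using (var requestScope = container.BeginLifetimeScope("httpRequest"))
            {
                // Resolving from the tagged scope works; resolving AccountController
                // directly from `container` would fail with the "no scope matching" error,
                // because its dependency chain needs the request-tagged scope.
                var controller = requestScope.Resolve<AccountController>();
            }
        }
    }

    In the web integration, making sure the controller is resolved through the integration's per-request scope, rather than from the container built at startup, is usually what makes HttpRequestScoped registrations resolvable.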

    Read the article

  • How to set gridview column width with respect to resolution

    - by Royson
    My grid view has 4 columns. On my machine it is displayed properly according to my code, but when I run my app on other machines the column height increases and the header text of two columns is displayed on two lines, for example "MyAllFolder Path". It should be displayed on one line, with the column width changing instead. How do I dynamically change the column width so that the column header text fits on one line?
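    If this is a WinForms DataGridView (an assumption - the question doesn't say which grid control is used), one way to keep headers on a single line regardless of the other machine's font/DPI settings is to measure the header text with the font actually in effect at runtime and widen the column to fit, as in this sketch:

    using System.Drawing;
    using System.Windows.Forms;

    static class GridColumnSizer
    {
        // Widen each column so its header text fits on one line with the runtime
        // font, instead of relying on widths tuned on the development machine.
        public static void FitColumnsToHeaders(DataGridView grid)
        {
            Font headerFont = grid.ColumnHeadersDefaultCellStyle.Font ?? grid.Font;
            foreach (DataGridViewColumn col in grid.Columns)
            {
                int needed = TextRenderer.MeasureText(col.HeaderText, headerFont).Width + 20; // padding for borders/sort glyph
                if (col.Width < needed)
                    col.Width = needed;
            }
        }
    }

    Setting a column's AutoSizeMode to DataGridViewAutoSizeColumnMode.ColumnHeader achieves much the same thing if letting the grid manage widths is acceptable.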

    Read the article

  • PHP Scope Resolution Operator Question

    - by anthony
    I'm having trouble with the MyClass::function(); style of calling methods and can't figure out why. Here's an example (I'm using Kohana framework btw):

    class Test_Core {
        public $var1 = "lots of testing";

        public function output()
        {
            $print_out = $this->var1;
            echo $print_out;
        }
    }

    I try to use the following to call it, but it returns $var1 as undefined: Test::output() However, this works fine:

    $test = new Test();
    $test->output();

    I generally use this style of calling objects as opposed to the "new Class" style, but I can't figure out why it doesn't want to work.
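    The underlying issue is language-independent: output() reads an instance property ($this->var1), so it needs an object to run against, and a Test::output() style call doesn't supply one, which is why $var1 comes back undefined; only genuinely static members can safely be reached that way. Purely as an analogy (C# rather than PHP, with illustrative names), the same distinction looks like this:

    class Test
    {
        // Instance state: only exists once an object has been created.
        string var1 = "lots of testing";
        public void Output() { System.Console.WriteLine(var1); }

        // Static state: belongs to the class itself, no instance required.
        static string shared = "shared text";
        public static void OutputShared() { System.Console.WriteLine(shared); }
    }

    class Program
    {
        static void Main()
        {
            Test.OutputShared();    // fine: static call, no instance state touched
            new Test().Output();    // fine: instance call
            // Test.Output();       // does not compile: Output() needs an instance
        }
    }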

    Read the article
