Search Results

Search found 24498 results on 980 pages for 'lock pages in memory'.

Page 276/980 | < Previous Page | 272 273 274 275 276 277 278 279 280 281 282 283  | Next Page >

  • Windows server response time very high

    - by Nagaraju Bandla
    Server specs: Windows Server 2008 R2 64-bit; provider: Fasthosts; .NET Framework 4.0; 6 GB RAM (it's using 4.6 GB). I have a website with thousands of pages, structured like:
        folderone/1/one to 500.aspx
        folderone/2/one to 500.aspx
        ...
        folderone/500/one to 500.aspx
    Loading these pages for the first time after a release takes about 20 to 30 minutes per folder; once one page has loaded, the rest of the pages load fine. This happens for all folders, and it repeats every time I restart the server, add anything to App_Code, or change web.config. My site mainly serves Google traffic, and because of this problem it's giving errors. Any help will be highly appreciated; I'm happy to buy you a beer if it's resolved. Thanks in advance...
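    Editor's note: the per-folder delay described above is typical of ASP.NET compiling each directory on its first request (and recompiling after App_Code or web.config changes). A hedged mitigation, assuming this is a Web Site project, is to precompile before deployment; all paths below are placeholders:

        rem Precompile the site so first requests don't trigger per-folder compilation
        C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -v /MySite -p C:\inetpub\wwwroot\MySite C:\precompiled\MySite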

    Read the article

  • Running two Magento installations, one of which has 3 stores set up as multi-store. Which server?

    - by Pedro Peixoto
    I want to run 4 Magento stores in 2 different installations. One is a standalone installation with 3 languages. The other is a multi-store with 3 different online stores on different domains. At the moment we have a VPS with 1 GB memory; would that be enough? I ask because I've finished the standalone store and already put it online, and the server is already running at 62% memory. Ideally this would be enough, as my company wouldn't like to move to a dedicated server (as it involves costs). I'm sure I can try to optimize Magento to run on less memory (I'm expecting visits averaging 2,000/day across all sites); if I could have some tips on the best way to do that, I'd appreciate it too.

    Read the article

  • When load balancing, must all copies of static web page be exactly the same?

    - by Gilles Blanchette
    I'm used to finding answers for everything on the web, but not this time... Yesterday I enabled Amazon's weighted DNS functionality to load balance 7 websites between two different IP addresses (split 50%-50%). Both servers run IIS 8.5, and the sites run well on both sides. Today I found that Google Webmaster Tools is reporting fetch errors for robots.txt, failing on close to 50% of access attempts. The robots.txt file is fine and accessible (even via Google's URL testing page) on both servers. Let's say the current version of the static web pages is on the first computer and an updated version of the same pages is on the second computer. Could that be the problem? When load balancing, can static web pages be slightly different from one host server to the other? Thank you for your help.
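    Editor's note: a hedged way to isolate which backend is failing is to fetch robots.txt from each server directly, pinning the hostname to one IP at a time (domain and IPs below are placeholders):

        # --resolve forces curl to connect to a specific IP for the given host/port
        curl --resolve www.example.com:80:203.0.113.10 http://www.example.com/robots.txt
        curl --resolve www.example.com:80:203.0.113.20 http://www.example.com/robots.txt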

    Read the article

  • ASP Fails with 500 Error

    - by VinceM
    We have a server set up as an IIS box, with some static pages and a few ASP pages that handle the form submissions. The ASP is really VBScript that sends a CDO message. After moving these pages to the new server, the form will not submit; it gives a 500 error, and the following shows in Event Viewer:

    Error: The Template Persistent Cache initialization failed for Application Pool 'DefaultAppPool' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.

    I can't seem to find any info on this anywhere... I was thinking it may have something to do with the fact that we created this server from an image of another server. Thanks for your help in advance... Vince
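    Editor's note: this event usually indicates the application pool identity cannot create subdirectories under the ASP template disk cache, which a server built from an image can easily have lost the ACLs for. A hedged fix, assuming the default cache path:

        rem Grant the IIS worker group modify rights on the ASP disk cache directory
        icacls "C:\inetpub\temp\ASP Compiled Templates" /grant "IIS_IUSRS:(OI)(CI)(M)"
        rem Then recycle the pool
        %windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"DefaultAppPool"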

    Read the article

  • Remote Desktop closes with Fatal Error (Error Code: 5)

    - by Swinders
    We have one PC (Windows XP SP3) that we cannot log onto using a Remote Desktop session. Logging on to the PC directly (sitting in front of it using the connected keyboard and monitor) works fine. From a second PC (I've tried a number of different ones, all Windows XP SP3) I run 'mstsc' and type in the PC name to connect to. This shows the login box, into which we enter the correct login details and click OK. Within a few seconds we get an error:

    Title: Fatal Error (Error Code: 5)
    Error: Your Remote Desktop session is about to end. This computer might be low on virtual memory. Close your other programs, and then try connecting to the remote computer again. If the problem continues, contact your network administrator or technical support.

    None of the computers we are using are low on memory (2 GB+), and we let Windows manage the virtual memory side of things. We do not see this with any other PC, and we do use Remote Desktop in meeting rooms to connect to user PCs with no problems.

    Read the article

  • Google Chrome not using local cache

    - by Steve
    Hi. I've been using Google Chrome as a substitute for Firefox, which can't handle having lots of tabs open at the same time. Unfortunately, it looks like Chrome is having the same problem. Freakin' useless. I had to end Chrome because my whole system had slowed to a crawl. When I restarted it, I opted to restore the tabs that were last open. At this stage, every one of the 20+ tabs started re-downloading the pages they previously had open. My question is: why can't they open a locally stored/saved copy of the web page from cache? Does Google Chrome store pages in a cache? Also: after most of the pages had completed downloading, I clicked on each tab to view the page. Half of them only displayed a white page, and I had to reload the page manually. What is causing this? Thanks for your help.

    Read the article

  • What does this error mean (Can't create TCP/IP socket (24))?

    - by user105196
    I have a web server running RHEL 6.2, with MySQL 5.5.23 on another server. The web server can read from the MySQL server without problems, but sometimes I get this error:

    [Sun Sep 23 06:13:07 2012] [error] [client XXXXX] DBI connect('XXXX:192.168.1.2:3306','XXX',...) failed: Can't create TCP/IP socket (24) at /var/www/html/file.pm line 199.

    My question: what does this error mean (Can't create TCP/IP socket (24))? Is it an OS error or a MySQL error?

    perl -v: This is perl, v5.10.1 (*) built for x86_64-linux-thread-multi
    mysql -V: mysql Ver 14.14 Distrib 5.5.23, for Linux (x86_64) using readline 5.1

    su - mysql -s /bin/bash -c 'ulimit -a'
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 127220
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 1024
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
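    Editor's note: the "(24)" is the OS errno, and errno 24 is EMFILE ("Too many open files"), so socket() is failing because the calling process (the Perl/web server side, not MySQL) has hit its open-file limit; note the "open files (-n) 1024" above. A hedged way to confirm and raise it (PID and username below are placeholders):

        # errno 24 decodes to "Too many open files"
        perl -e '$! = 24; print "$!\n"'
        # count descriptors held by the suspect process
        ls /proc/1234/fd | wc -l
        # raise the limit in /etc/security/limits.conf for the web server user, e.g.:
        #   apache  soft  nofile  8192
        #   apache  hard  nofile  8192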

    Read the article

  • Varnish cache and PHP session; setting header?

    - by StCee
    By default Varnish will not cache pages with cookies. I've read in some posts that one workaround for PHP pages is to set header('Cache-Control: public, s-maxage=60'); in the PHP pages. But would that make Varnish cache the page along with the session cookie? A session is started on that page, and although there is nothing personal on it, I would still want the session to persist in case the user does something private later. So is there a way to cache the page without the session cookie, and still be able to pass the session between pages? I can imagine some sort of weird solution with hidden forms, but I would prefer it if this can be done with VCL configuration or header settings. Thanks a lot!
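    Editor's note: a hedged sketch of the usual approach, in Varnish 3 VCL; the URL pattern is a placeholder, and the trade-off is that the session only starts on pages that keep their cookies:

        sub vcl_recv {
            if (req.url ~ "^/public/") {
                unset req.http.Cookie;        # make the request cacheable
            }
        }
        sub vcl_fetch {
            if (req.url ~ "^/public/") {
                unset beresp.http.Set-Cookie; # never cache a session cookie
                set beresp.ttl = 60s;         # mirrors s-maxage=60 above
            }
        }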

    Read the article

  • CPU Configuration Issue for 2 Servers (Server 2008 R2)

    - by Bill Moreland
    I have 2 servers running the exact same Classic ASP code with Access DBs (yes, not ideal, but it is what it is, for now). 1) Xeon 5520 @ 2.27 GHz (6 GB memory); 2) Xeon E5-2620 @ 2.00 GHz (2 processors, 32 GB memory). For most pages the newer E5-2620 processes the pages 10-15% faster. On pages requiring heavy and/or multiple complicated Access stored procedures (queries), the older 5520 does a much better job. I believe the servers are configured nearly identically. My question: is it possible that the newer, multi-processor server is not as good at handling Classic ASP as the older single-processor one? Is there a configuration difference that needs to be in place that I'm missing, since I'm shooting for identical implementations?

    Read the article

  • Table Formatting in Word

    - by user359217
    I have a table in Word which is 5 columns wide and multiple rows long. In row 3, cells 1, 2, 3 and 5 have simple text. Cell 4 contains a large quantity of text and therefore needs to wrap over several pages, so I mark "Allow row to break across pages". Problem: on the next page, where the row has wrapped, cells 1, 2, 3 and 5 are blank, with cell 4 displaying the wrapped text. Is there any way I can get the simple text from row 3, cells 1, 2 and 3 to repeat on the pages that contain the wrapped text of cell 4? I do not want the data to be in the table heading, as I have multiple rows with a similar volume of text.

    Read the article

  • How is Javascript parsed/executed in a web browser exactly?

    - by ededed
    For example, when I access a web server, JavaScript will likely execute. From there, how does the browser parse the JavaScript, "execute" the functions, manage the memory used, etc.? How does the browser "handle" all of that? Does it act like a compiler's lexer, in that it passes over the source line by line and generates object code, or does it use the DOM and other specifications to handle memory, etc.? Also, how does this interact with updating the page, and with other concurrent executions such as Flash, HTML, and Java? Put simply: how does the browser handle the scripts, the memory, and the on-page logic from a JavaScript file?

    Read the article

  • Binding menu items to a sitemap.

    - by Ricardo Deano
    Hello all... this is driving me nuts. I have a navigation menu I would like to display based upon user roles (using .NET Membership). After several hours and headaches (from banging my head against the desk) I was wondering if someone could point out the error of my ways.

    Page:

    <body>
      <form runat="server">
        <div class="page">
          <div class="header">
            <div class="loginDisplay">
              <asp:LoginView ID="HeadLoginView" runat="server" EnableViewState="false">
                <AnonymousTemplate>
                  <a href="~/Login.aspx" ID="HeadLoginStatus" runat="server">Log In</a>
                </AnonymousTemplate>
                <LoggedInTemplate>
                  Welcome <span class="bold"><asp:LoginName ID="HeadLoginName" runat="server" /></span>!
                  [ <asp:LoginStatus ID="HeadLoginStatus" runat="server" LogoutAction="Redirect"
                      LogoutText="Log Out" LogoutPageUrl="~/Open/Close.aspx"/> ]
                </LoggedInTemplate>
              </asp:LoginView>
            </div>
            <div class="clear hideSkiplink">
              <asp:Menu ID="NavigationMenu" runat="server" CssClass="menu" IncludeStyleBlock="False"
                  Orientation="Horizontal" DataSourceID="AugustSiteMap" />
              <asp:SiteMapDataSource ID="AugustSiteMap" runat="server" ShowStartingNode="false"/>
            </div>
          </div>

    SiteMap:

    <?xml version="1.0" encoding="utf-8" ?>
    <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
      <siteMapNode url="~/Default.aspx" title="Home" description="Home">
        <siteMapNode title="Open Pages" description="Open Pages">
          <siteMapNode url="~/Open/Login.aspx" title="Login Page" description="Login Page" roles="*"/>
          <siteMapNode url="~/Open/Close.aspx" title="Thank you for using Valpak Data Solutions Online Reporting" description="Thank you for using Valpak Data Solutions Online Reporting" roles="*"/>
        </siteMapNode>
        <siteMapNode title="Logged In Open Pages" description="Logged In Open Pages">
          <siteMapNode url="~/Landing.aspx" title="Landing Page" description="Landing Page" roles="*"/>
          <siteMapNode url="~/ContactUs.aspx" title="Contact Us" description="Contact Us" roles="*"/>
        </siteMapNode>
        <siteMapNode title="Restricted Pages" description="Restricted Pages">
          <siteMapNode url="~/Restricted/ProductSearch.aspx" title="Product Search" description="Product Search" roles="*"/>
          <siteMapNode url="~/Restricted/ReportOutput.aspx" title="Report Output" description="Report Output" roles="Admin"/>
        </siteMapNode>
      </siteMapNode>
    </siteMap>

    Web.config:

    <roleManager enabled="true" />
    <siteMap defaultProvider="XmlSiteMapProvider" enabled="true">
      <providers>
        <add name="XmlSiteMapProvider"
             description="AugustSiteMap"
             type="System.Web.XmlSiteMapProvider"
             siteMapFile="AugustSiteMap.sitemap"
             securityTrimmingEnabled="true" />
      </providers>
    </siteMap>

    How can I ensure that when the user is logged in, the appropriate menu items are displayed on the Landing page? Please excuse my ignorance; I'm still new to all of this, and my current method of 'trial and error' has seen me reach suicidal levels this morning!

    Read the article

  • Critique my heap debugger

    - by FredOverflow
    I wrote the following heap debugger in order to demonstrate memory leaks, double deletes and wrong forms of delete (i.e. trying to delete an array with delete p instead of delete[] p) to beginning programmers. I would love to get some feedback on it from strong C++ programmers, because I have never done this before and I'm sure I've made some stupid mistakes. Thanks!

        #include <cstdlib>
        #include <iostream>
        #include <new>

        namespace
        {
            const int ALIGNMENT = 16;
            const char* const ERR = "*** ERROR: ";
            int counter = 0;

            struct heap_debugger
            {
                heap_debugger()
                {
                    std::cerr << "*** heap debugger started\n";
                }
                ~heap_debugger()
                {
                    std::cerr << "*** heap debugger shutting down\n";
                    if (counter > 0)
                    {
                        std::cerr << ERR << "failed to release memory " << counter << " times\n";
                    }
                    else if (counter < 0)
                    {
                        std::cerr << ERR << (-counter) << " double deletes detected\n";
                    }
                }
            } instance;

            void* allocate(size_t size, const char* kind_of_memory, size_t token) throw (std::bad_alloc)
            {
                void* raw = malloc(size + ALIGNMENT);
                if (raw == 0) throw std::bad_alloc();
                *static_cast<size_t*>(raw) = token;
                void* payload = static_cast<char*>(raw) + ALIGNMENT;
                ++counter;
                std::cerr << "*** allocated " << kind_of_memory << " at " << payload << " (" << size << " bytes)\n";
                return payload;
            }

            void release(void* payload, const char* kind_of_memory, size_t correct_token, size_t wrong_token) throw ()
            {
                if (payload == 0) return;
                std::cerr << "*** releasing " << kind_of_memory << " at " << payload << '\n';
                --counter;
                void* raw = static_cast<char*>(payload) - ALIGNMENT;
                size_t* token = static_cast<size_t*>(raw);
                if (*token == correct_token)
                {
                    *token = 0xDEADBEEF;
                    free(raw);
                }
                else if (*token == wrong_token)
                {
                    *token = 0x177E6A7;
                    std::cerr << ERR << "wrong form of delete\n";
                }
                else
                {
                    std::cerr << ERR << "double delete\n";
                }
            }
        }

        void* operator new(size_t size) throw (std::bad_alloc)
        {
            return allocate(size, "non-array memory", 0x5AFE6A8D);
        }

        void* operator new[](size_t size) throw (std::bad_alloc)
        {
            return allocate(size, "    array memory", 0x5AFE6A8E);
        }

        void operator delete(void* payload) throw ()
        {
            release(payload, "non-array memory", 0x5AFE6A8D, 0x5AFE6A8E);
        }

        void operator delete[](void* payload) throw ()
        {
            release(payload, "    array memory", 0x5AFE6A8E, 0x5AFE6A8D);
        }
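    Editor's note: a minimal driver (not part of the original post) that exercises each diagnostic. It relies on the freed block not being reused between the two deletes, which is fine for a teaching demo but undefined behaviour in general; note also that the shutdown summary is a net counter, so a leak and a double delete in the same run can cancel out.

        // demo.cpp - deliberately buggy code to trigger the heap debugger's reports
        int main()
        {
            int* p = new int(42);   // "allocated non-array memory at ..."
            int* a = new int[10];   // "allocated     array memory at ..."
            delete a;               // wrong form: should be delete[] -> "wrong form of delete"
            delete p;               // correct form: token matches, memory freed
            delete p;               // "double delete" (freed block still holds 0xDEADBEEF)
            return 0;               // net counter is -1 here -> "1 double deletes detected"
        }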

    Read the article

  • Form submission info showing up in URL and not working

    - by kcurtin
    I am making a Rails 3.1 app and have a signup form that was working fine, but I seem to have changed something that broke it. I'm using Twitter Bootstrap and the twitter_bootstrap_form_for gem. I made some change that messed with the formatting of the form fields, but more importantly, when I submit the Sign Up form to create a new User, the information shows up in the URL and looks like this (EDIT: this happens in the latest versions of Chrome and Firefox):

    http://localhost:3000/?utf8=%E2%9C%93&authenticity_token=UaKG5Y8fuPul2Klx7e2LtdPLTRepBxDM3Zdy8S%2F52W4%3D&user%5Bemail%5D=kevinc%40example.com&user%5Bpassword%5D=testing&user%5Bpassword_confirmation%5D=testing&commit=Sign+Up

    Here is the code for the form:

    <div class="span7">
      <h3 class="center" id="more">Sign Up Now!</h3>
      <%= twitter_bootstrap_form_for @user do |user| %>
        <%= user.email_field :email, :placeholder => '[email protected]' %>
        <%= user.password_field :password %>
        <%= user.password_field :password_confirmation, 'Confirm Password' %>
        <%= user.actions do %>
          <%= user.submit 'Sign Up' %>
        <% end %>
      <% end %>
    </div>

    Here is the code for the UsersController:

    class UsersController < ApplicationController
      def new
        @user = User.new
      end

      def create
        @user = User.new(params[:user])
        if @user.save
          redirect_to about_path, :notice => "Signed up!"
        else
          render 'new'
        end
      end
    end

    Not sure if there is more you need, but if so let me know! Thank you!

    Edit: for debugging I tried specifying :post and also using a plain form_for:

    <%= form_for(@user, :method => :post) do |f| %>
      <div class="field">
        <%= f.label :email %>
        <%= f.email_field :email %>
      </div>
      <div class="field">
        <%= f.label :password %>
        <%= f.password_field :password %>
      </div>
      <div class="field">
        <%= f.label :password_confirmation %>
        <%= f.password_field :password_confirmation %>
      </div>
      <div class="actions"><%= f.submit "Sign Up" %></div>
    <% end %>

    This gives me the same problem as above. Adding routes.rb:

    Auth31::Application.routes.draw do
      get "home" => "pages#home"
      get "about" => "pages#about"
      get "contact" => "pages#contact"
      get "help" => "pages#help"
      get "login" => "sessions#new", :as => "login"
      get "logout" => "sessions#destroy", :as => "logout"
      get "signup" => "users#new", :as => "signup"
      root :to => "pages#home"
      resources :pages
      resources :users
      resources :sessions
      resources :password_resets
    end

    Read the article

  • Wireless doesn't work on a Lenovo V570

    - by Stephen
    I've had Ubuntu installed on my HD for about 3 months, but ever since I ran into this wireless issue I've kind of lost my love of Ubuntu. I have zero experience getting around with / using the console command line. I have a Lenovo V570. I got the driver update for the Broadcom networking card via the Additional Drivers application, but that did nothing. I love the look and feel of using Ubuntu, but I have no technological experience in the matter; any help would be awesome. When I scan for wireless connections while in Ubuntu, my computer picks up nothing, while in Win7 it will pick up the handful of wireless networks around my area. My wired connection is fine, but having no wireless on a laptop rather defeats the point of it as a feature. Cheers! Also, I just installed 11.10, if that helps any. Yes, I used the search before I posted this, but again I have ZERO understanding of the command stuff and need a meat-and-potatoes answer(s).

    stephen@ubuntu:~$ sudo lshw -class network
    [sudo] password for stephen:
      *-network UNCLAIMED
           description: Network controller
           product: BCM4313 802.11b/g/n Wireless LAN Controller
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:03:00.0
           version: 01
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list
           configuration: latency=0
           resources: memory:f1900000-f1903fff
      *-network
           description: Ethernet interface
           product: RTL8111/8168B PCI Express Gigabit Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:04:00.0
           logical name: eth0
           version: 06
           serial: f0:de:f1:63:98:14
           size: 100Mbit/s
           capacity: 1Gbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.1.78 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
           resources: irq:41 ioport:2000(size=256) memory:f1804000-f1804fff memory:f1800000-f1803fff

    stephen@ubuntu:~$ rfkill list all
    0: ideapad_wlan: Wireless LAN
            Soft blocked: yes
            Hard blocked: no
    1: acer-wireless: Wireless LAN
            Soft blocked: yes
            Hard blocked: no
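    Editor's note: the rfkill output above shows the WLAN is soft-blocked, and lshw shows the controller UNCLAIMED (no driver bound). A hedged first step for a BCM4313 on 11.10:

        sudo rfkill unblock all     # clear the soft block reported above
        sudo modprobe brcmsmac      # open-source driver that supports the BCM4313
        # or, if the proprietary Broadcom STA driver was installed via Additional Drivers:
        # sudo modprobe wl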

    Read the article

  • Speaker at developer conferences and user group meetings

    Catching up on a couple of sessions I did in the past, this article gives an overview of some of my activities, mainly at the annual German Visual FoxPro Developer Conference, also known as the SQL-Server & ASP.NET Conference, in Frankfurt. The entries listed below are excerpts from the original Conference Coverage documents you'll find on UniversalThread.
    German Visual FoxPro Developer Conference 2002 (1 session - vendor session about Active FoxPro Pages 3.0)
    German Visual FoxPro Developer Conference 2003 (2.5 sessions - Visual FoxPro running on Linux)
    German Visual FoxPro Developer Conference 2004 (4 sessions - 2x Active FoxPro Pages, VFP on Linux, and VFP using additional databases)
    German Visual FoxPro Developer Conference 2005 (4 sessions - RegEx, XML, XSLT, and using free (as in beer) development tools)
    German Visual FoxPro Developer Conference 2006 (3 sessions - .NET interop via COM, writing your own CLR host in VFP, and Active FoxPro Pages)
    Furthermore, I did a couple of (hopefully) interesting sessions at various user group meetings in Speyer and Stuttgart. A more comprehensive list is available under Presentations (in German). And last but not least, back in May 2005 Microsoft Germany invited me to host a WebCast for MSDN on how to use 'Visual FoxPro mit Visual Studio 2005'. Unfortunately, I was too inexperienced and too nervous (first time ever), we experienced technical issues with the microphone, and the obviously low quality of the recording demanded it be replaced by a whole series on Visual FoxPro 9.0. The webcast covered the same topics I have already described in other articles here on my blog. Despite the disaster, I'd like to thank Ralf Westphal for his kind words afterwards; I really felt bad. Eventually, you might ask yourself why it all stopped by the end of 2006... Well, new chapter in my life: Mauritius!

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this.

    Unit Testing Entity Framework Dependent Code using TypeMock Isolator, by Muhammad Mosa

    Introduction

    Unit testing data access code is, in my opinion, a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies, such as a physical database connection to insert, update, delete or retrieve your data. However, when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward, but the version of Entity Framework released with .NET 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult.

    Faking Entity Framework

    As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this, but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include:
    - Not being able to fake calls to non-virtual methods
    - Not being able to fake sealed classes
    - Not being able to fake LINQ to Entities queries (replacing database calls with in-memory collection calls)
    There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I'm going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator.

    Mocking Entity Framework with MoQ

    One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing just any connection string. You have to pass a correct Entity Framework connection string that specifies the CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we'll explore another limitation: not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (an instance of a Blog entity) to the Blogs entity set in an ObjectContext.

        public override void Add(Blog blog)
        {
            if (BlogContext.Blogs.Any(b => b.Name == blog.Name))
            {
                throw new InvalidOperationException("Blog with same name already exists!");
            }
            BlogContext.AddToBlogs(blog);
        }

    The method does a very simple check that the name of the new Blog entity instance doesn't already exist, via the simple LINQ query above. If the blog doesn't already exist, it simply adds it to the current context, to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an InvalidOperationException will be thrown. Let us now create a unit test for the Add method using MoQ.

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exists()
        {
            //(1) We shouldn't depend on configuration when doing unit tests!
            //But it's a workaround to fake the ObjectContext
            string connectionString = ConfigurationManager
                                        .ConnectionStrings["MyBlogConnString"]
                                        .ConnectionString;
            //(2) Arrange: fake ObjectContext
            var fakeContext = new Mock<MyBlogContext>(connectionString);
            //(3) The next line will pass, as the ObjectContext can now be faked with a proper connection string
            var repo = new BlogRepository(fakeContext.Object);
            //(4) Create a fake ObjectQuery<Blog> to substitute for the MyBlogContext.Blogs property
            var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object);
            //(5) Arrange: set expectations
            //The next line will throw an exception from MoQ:
            //System.ArgumentException: Invalid setup on a non-overridable member
            fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);
            fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);
            //Act
            repo.Add(new Blog { Name = "NewBlog" });
        }

    This test method checks that the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that already exists. At (1) a connection string is initialized from the configuration file, to retrieve the full Entity Framework connection string. At (2) a fake ObjectContext is created; the ObjectContext here is MyBlogContext, faked with MoQ via new Mock<MyBlogContext>(connectionString). At (3) a BlogRepository instance is created; BlogRepository depends on the generated Entity Framework ObjectContext, so the fake context is passed to its constructor. At (4) a fake instance of ObjectQuery<Blog> is created to substitute for the MyBlogContext.Blogs property, as we will see at (5). At (5) we set up an expectation for calls to the Blogs property of MyBlogContext and substitute the return value with the fake ObjectQuery<Blog> instance created at (4). When you run this test it fails, with MoQ throwing an exception because of this line:

        fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);

    This happens because the generated property MyBlogContext.Blogs is not virtual/overridable. And even if it were virtual, or you managed to make it virtual, the test would fail at the following line, throwing the same exception:

        fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

    This time the test fails because the Any extension method is not virtual/overridable. You won't be able to replace ObjectQuery<Blog> with a fake in-memory collection to test your LINQ to Entities queries. Now let's see how replacing MoQ with TypeMock Isolator can help.

    Mocking Entity Framework with TypeMock Isolator

    The following is the same test method we had above for MoQ, but this time implemented using TypeMock Isolator:

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException()
        {
            //(1) Create a fake in-memory collection of blogs
            var fakeInMemoryBlogs = new List<Blog> { new Blog { Name = "FakeBlog" } };
            //(2) Create a fake context
            var fakeContext = Isolate.Fake.Instance<MyBlogContext>();
            //(3) Set up the expected call to MyBlogContext.Blogs through the fake context
            Isolate.WhenCalled(() => fakeContext.Blogs)
                   .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable());
            //(4) Create a new blog with a name that already exists in the fake in-memory collection from (1)
            var blog = new Blog { Name = "FakeBlog" };
            //(5) Instantiate the BlogRepository (class under test)
            var repo = new BlogRepository(fakeContext);
            //(6) Act by adding the newly created blog
            repo.Add(blog);
        }

    When run, the above test method passes, as the Add method of BlogRepository throws an InvalidOperationException, which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking the ObjectContext at (2) didn't require a connection string. At (3) Isolator sets up a fake result for MyBlogContext.Blogs when it is called through the fake instance fakeContext created at (2); the fake result is just the in-memory collection declared and initialized at (1). Finally, at (6), we act by calling the Add method of BlogRepository, passing a new Blog instance with a name that already exists in the fake in-memory collection set up at (1). As expected, the test passes because it throws the expected exception declared on top of the test method: InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease.

    Conclusion

    We explored how to write a simple unit test using TypeMock Isolator for code which uses Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is able to handle. There are workarounds you can use to overcome the limitations when using MoQ or Rhino Mocks; however, the workarounds will require you to write more code, and your tests will likely be more complex. For a comparison between different mocking frameworks, take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post. Muhammad Mosa http://mosesofegypt.net/ http://twitter.com/mosessaur Screencast of unit testing Entity Framework. Related links: GuestPost: Introduction to Mocking; GuestPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

  • Google indexing and ranking a custom domain served by Google App Engine

    - by Hugues
    I have a website served at the URL "http://www.plugimmo.com", which is a custom domain served by Google App Engine at the URL http://plugimmo.appspot.com. For a while I have been trying to optimise the Google indexing and ranking, with no success. The problem is that searching Google for the keywords in the title of my home page does not retrieve my website at all, not even in the first 1,000 results. When checking the cached version on Google (cache:www.plugimmo.com), it says the cached version is the one of 20-Aug-12 for "plugimmo.appspot.com". It looks like there are several issues: 1. The cached version is really old; I have made a lot of changes since 20-Aug-12, and I saw the Googlebot crawling my site several times. 2. The cached version is for "plugimmo.appspot.com". 3. When looking at Google Webmaster Tools, I see that the number of pages indexed for www.plugimmo.com is 0, but that can't be the case given the number of changes I've made since then. My questions are therefore the following: Why is the cached version so old, although I saw the Googlebot crawling the site many times since 20-Aug-12? Is there a problem with indexing a custom domain served by Google App Engine? Why is Google Webmaster Tools showing 0 pages indexed, although new pages have been crawled and no errors have been reported in the indexing? Also, the site has been developed with Google Web Toolkit, and I have followed the guidelines regarding crawling Ajax sites. The home page, when crawled by a robot, is redirected to http://www.plugimmo.com/HomeSnapshot.html. Thanks a lot for your help! Hugues

    Read the article

  • Strange mouse pointer problem...

    - by Paska
    Hi all, I have the latest Ubuntu installed (10.10), but after a year of thousands of updates, video driver updates, and hundreds of tricks, the mouse pointer is shown as an UGLY square... These are the screenshots: First, Second. I have no idea what to do to solve this problem. Does anyone have an idea how to solve it? Edit: this problem has been present since 8.10+! Edit 2, video card specifications:

    paska@ubuntu:~$ hwinfo --gfxcard
    35: PCI 100.0: 0300 VGA compatible controller (VGA)
      [Created at pci.318]
      UDI: /org/freedesktop/Hal/devices/pci_1106_3230
      Unique ID: VCu0.QX54AGQKWeE
      Parent ID: vSkL.CP+qXDDqow8
      SysFS ID: /devices/pci0000:00/0000:00:01.0/0000:01:00.0
      SysFS BusID: 0000:01:00.0
      Hardware Class: graphics card
      Model: "VIA K8M890CE/K8N890CE [Chrome 9]"
      Vendor: pci 0x1106 "VIA Technologies, Inc."
      Device: pci 0x3230 "K8M890CE/K8N890CE [Chrome 9]"
      SubVendor: pci 0x1043 "ASUSTeK Computer Inc."
      SubDevice: pci 0x81b5
      Revision: 0x11
      Memory Range: 0xd0000000-0xdfffffff (rw,prefetchable)
      Memory Range: 0xfa000000-0xfaffffff (rw,non-prefetchable)
      Memory Range: 0xfbcf0000-0xfbcfffff (ro,prefetchable,disabled)
      IRQ: 16 (10026 events)
      I/O Ports: 0x3c0-0x3df (rw)
      Module Alias: "pci:v00001106d00003230sv00001043sd000081B5bc03sc00i00"
      Driver Info #0:
        Driver Status: viafb is not active
        Driver Activation Cmd: "modprobe viafb"
      Config Status: cfg=new, avail=yes, need=no, active=unknown
      Attached to: #17 (PCI bridge)

    Primary display adapter: #35
    paska@ubuntu:~$

    thanks, A

    Read the article

  • SQLAuthority News – Fast Track Data Warehouse 3.0 Reference Guide

    - by pinaldave
    http://msdn.microsoft.com/en-us/library/gg605238.aspx I am very excited that the Fast Track Data Warehouse 3.0 reference guide has been announced. As a consultant I have always enjoyed working on Fast Track Data Warehouse projects, as they truly express the potential of the SQL Server engine. Here are a few details of the enhancements in the Fast Track Data Warehouse 3.0 reference architecture. The SQL Server Fast Track Data Warehouse initiative provides a basic methodology and concrete examples for the deployment of a balanced hardware and database configuration for a data warehousing workload. Balance is measured across the key components of a SQL Server installation: storage, server, application settings, and configuration settings for each component are evaluated.

    Description / Note:
    - FTDW 3.0 Architecture: Basic component architecture for FT 3.0 based systems.
    - New Memory Guidelines: Minimum and maximum tested memory configurations by server socket count.
    - Additional Startup Options: Notes for T-834 and the setting for Lock Pages in Memory.
    - Storage Configuration: RAID1+0 now standard (RAID1 was used in FT 2.0).
    - Evaluating Fragmentation: Query provided for evaluating logical fragmentation.
    - Loading Data: Additional options for CI table loads.
    - MCR: Additional detail and explanation of the FTDW MCR Rating.

    Read the white paper on Fast Track data warehousing. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Laptop with Intel Graphics: external VGA monitor only gets signal on boot (no "hot plugging")

    - by iveand
    I am able to get an external VGA monitor (or projector) to work if I start my laptop with it connected. However, if I start the laptop with it disconnected, there is no signal on the external display. The Displays screen shows the external monitor and thinks that it is active, but no signal is being sent to it. This has been a persistent problem since 10.04 (I am now on 12.04... each upgrade hoping something has improved). I should note that even when it works (starting with the display connected), Displays still says the monitor is "unknown" (but it sends the signal). For the correct resolution to display, I have had to add a few xrandr lines for my monitor to my .xprofile file... otherwise resolution is limited to the default 1024x768. So the resolution issues can be worked around, but the main issue is that the external display doesn't get a signal unless the machine is started with it connected. I have tried:
    - adding i915.modeset=1 to grub (also i965.modeset=1, since someone posted that this helped, even though lshw shows i915)
    - adding the following PPA and doing a dist-upgrade: sudo add-apt-repository ppa:xorg-edgers/ppa

    Here are the details. Laptop: Toshiba Tecra M10. lspci listing for video:

    00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller [8086:2a42] (rev 07)

    sudo lshw -C video listing:

    *-display:0
         description: VGA compatible controller
         product: Mobile 4 Series Chipset Integrated Graphics Controller
         vendor: Intel Corporation
         physical id: 2
         bus info: pci@0000:00:02.0
         version: 07
         width: 64 bits
         clock: 33MHz
         capabilities: msi pm vga_controller bus_master cap_list rom
         configuration: driver=i915 latency=0
         resources: irq:46 memory:ff400000-ff7fffff memory:e0000000-efffffff ioport:cff8(size=8)
    *-display:1 UNCLAIMED
         description: Display controller
         product: Mobile 4 Series Chipset Integrated Graphics Controller
         vendor: Intel Corporation
         physical id: 2.1
         bus info: pci@0000:00:02.1
         version: 07
         width: 64 bits
         clock: 33MHz
         capabilities: pm bus_master cap_list
         configuration: latency=0
         resources: memory:ffc00000-ffcfffff

    "System Info" shows my graphics as the following: Mobile Intel® GM45 Express Chipset x86/MMX/SSE2
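    Editor's note: a hedged workaround for the hot-plug case, assuming the i915 driver names the ports VGA1 and LVDS1 (check the actual names with xrandr -q after plugging in):

        xrandr -q                                    # re-probes connectors and lists detected modes
        xrandr --output VGA1 --auto --same-as LVDS1  # force the VGA output on at its preferred mode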

    Read the article

  • SQLAuthority News – Download Whitepaper – A Case Study on “Hekaton” against RPM – SQL Server 2014 CTP1

    - by Pinal Dave
    In this new world of social media, apps and mobile devices, we are all getting impatient. Automatic updates have spoiled a few of our habits. When a new feature is released, everybody wants to adopt it immediately and start using it. Though this is true in the world of apps and smart phones, it is still not possible in the developer's world. When new features arrive, before we start using them we need to spend quite a lot of time understanding and testing them. Once we are sold on a feature, we refer it to our manager, and eventually the entire organization makes a decision on upgrading to use the new feature. Similarly, when the new In-Memory OLTP feature was announced, pretty much every SQL Server DBA wanted to implement it on their server. Though implementing the feature is not hard, it is not that easy either. One has to do proper research about one's own environment and workload before implementing it. Microsoft has recently released a case study on the In-Memory OLTP feature. Here is the abstract from the white paper itself: I/O latch can cause session delays that impact application performance. This white paper describes the procedures and common I/O latch issues when migrating to Hekaton in SQL Server 2014. It also includes challenges that occurred during the migration and the performance analysis at different stages. If you are going to implement an In-Memory OLTP database, this is a good case study to refer to. Download the white paper from here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL

    Read the article

  • Davicom Semiconductor, Inc. 21x4x DEC-Tulip not detected by Wireshark but IP operational

    - by deepsix86
    Recently flipped to Ubuntu 11.10 on a Dell 4300 (Intel). I'm getting an IP address and have no issues (ping/surf), but Wireshark is unable to detect the eth0 interface. I see references in forums to blacklisting tulip, but it looks like I am running dmfe. Not sure if the blacklist is required, or where to go from here. Maybe a driver update? I got a little lost looking in that area. Some h/w details below (IP/MAC/HOSTNAME removed).

    Linux xxxxxx 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 17:34:21 UTC 2012 i686 i686 i386 GNU/Linux

    network-admin (HOSTS TAB) does not list eth0, only loopback and a bunch of IPv6 interfaces.

    ifconfig:
    eth0    Link encap:Ethernet  HWaddr xxxxxxxx
            inet addr:192.168.x.xx  Bcast:192.168.2.255  Mask:255.255.255.0
            inet6 addr: xxxxxxxxxxx 64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:36662 errors:0 dropped:1 overruns:0 frame:0
            TX packets:24975 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:42115779 (42.1 MB)  TX bytes:3056435 (3.0 MB)
            Interrupt:18 Base address:0xe800

    lspci:
    02:09.0 Ethernet controller: Davicom Semiconductor, Inc. 21x4x DEC-Tulip compatible 10/100 Ethernet (rev 31)
            Subsystem: Device 4554:434e
            Flags: bus master, medium devsel, latency 64, IRQ 18
            I/O ports at e800 [size=256]
            Memory at fe1ffc00 (32-bit, non-prefetchable) [size=256]
            Expansion ROM at fe200000 [disabled] [size=256K]
            Capabilities: [50] Power Management version 2
            Kernel driver in use: dmfe
            Kernel modules: dmfe

    hwinfo --netcard:
    20: PCI 209.0: 0200 Ethernet controller
      [Created at pci.318]
      Unique ID: rBUF.0NgK5ZS9c0D
      Parent ID: 6NW+.siohrLUzzI4
      SysFS ID: /devices/pci0000:00/0000:00:1e.0/0000:02:09.0
      SysFS BusID: 0000:02:09.0
      Hardware Class: network
      Model: "Davicom 21x4x DEC-Tulip compatible 10/100 Ethernet"
      Vendor: pci 0x1282 "Davicom Semiconductor, Inc."
      Device: pci 0x9102 "21x4x DEC-Tulip compatible 10/100 Ethernet"
      SubVendor: pci 0x4554
      SubDevice: pci 0x434e
      Revision: 0x31
      Driver: "dmfe"
      Driver Modules: "dmfe"
      Device File: eth0
      I/O Ports: 0xe800-0xe8ff (rw)
      Memory Range: 0xfe1ffc00-0xfe1ffcff (rw,non-prefetchable)
      Memory Range: 0xfe200000-0xfe23ffff (ro,non-prefetchable,disabled)
      IRQ: 18 (61379 events)
      HW Address: 00:08:a1:01:35:70
      Link detected: yes
      Module Alias: "pci:v00001282d00009102sv00004554sd0000434Ebc02sc00i00"
      Driver Info #0:
        Driver Status: dmfe is active
        Driver Activation Cmd: "modprobe dmfe"
      Config Status: cfg=new, avail=yes, need=no, active=unknown
      Attached to: #11 (PCI bridge)
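    Editor's note: since eth0 is up and passing traffic, Wireshark not listing it is more often a capture-permissions issue than a dmfe/tulip driver problem. A hedged check on Ubuntu:

        sudo wireshark                            # if eth0 appears when run as root, it's permissions
        sudo dpkg-reconfigure wireshark-common    # answer Yes to allow non-root capture
        sudo usermod -aG wireshark $USER          # then log out and back in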

    Read the article

  • Blogger.com kills FTP

    - by Daniel Moth
    History (you can safely ignore)

    Back in 2002 I came across some (almost) free Linux/Apache space and set up my first manually created, HTML-based home page, which still exists: http://www.danielmoth.com/. In 2004 I wanted to have a blog that would be hosted in a sub-folder of my domain, and at the same time I did not want to mess with setting up a blog engine myself. I found the perfect solution in blogger.com, which offered a web interface for creating blog posts (and managing the pages' template) and would then use FTP to upload HTML pages to my space (no server-side programming/installation required at all)!

    FTP feature dropped by blogger.com

    Unfortunately, along the way Google purchased blogger.com, and a couple of months ago they announced that they have decided to kill the FTP feature, forcing customers using that feature to have their content hosted (in an opaque way) on Google's servers. Even though I prefer having my content on my own space, I would have considered moving it to Google's servers if I could host my blog in a sub-folder and preserve my full blog URL: http://www.danielmoth.com/Blog/ (including my home pages being hosted at the root of the domain). Sadly, that is not possible.

    What now

    So I decided to move my blog somewhere else. I'll document in the next few posts how I did that (inc. a tool I wrote) in case it helps someone else in the same situation, and also as a reminder to me if I need to do something like this again in the future. Comments about this post are welcome at the original blog.

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to assist the EM Administrator in monitoring the health of the EM infrastructure. Taking feedback delivered by customers directly and through customer advisory boards, some nice enhancements have been made to the "Manage Cloud Control" sections of the UI, commonly known in the EM community as "the MTM pages" (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we'll highlight some of the new information on display in these redesigned pages and explain how the information they present can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we'll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you'll see the new layout, which includes 3 tabs containing more drill-down information.

    The Repository Tab

    The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information the EM Administrator needs to review from time to time to ensure the infrastructure is in good health. Rather than go through every panel, let's call out a few and let you explore the others later yourself on your own EM site. First, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important elements of information relating to availability and reliability:
    - Is the database in archive log mode?
    - Is the database using Flashback?
    - When was the last database backup taken?
    In the test environment above the answers are not too worrying. However, production environments should have at least archive log mode enabled; Flashback is a nice feature to enable prior to upgrades (for fast rollback); and all production sites should have a backup. In this case the backup information in the control file indicates there have been no recorded backups. The next region of interest on this page shows key information about the Repository configuration, specifically the initialisation parameters (from the spfile). If you're storing your EM Repository in a cluster database, you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you'll note there is now a check performed on the active configuration to ensure that you're using, at the very least, Oracle's minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values, if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen in previous releases of the MTM pages in Cloud Control, but some important new functionality has been added that customers requested. First up: restarting Repository jobs.
    As you can see from the graphic, you can now optionally select a job (by selecting its row in the UI table element) and click the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation; you can now take care of it directly from within the UI. Next, you'll see that a feature has been added to allow the EM Administrator to customise the run time of some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don't clash with production backups, etc. is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid such clashes. Moving on to the next tab, let's select the Metrics tab.

    The Metrics Tab

    There are some big changes here: this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, or large numbers of new hosts or new target types, into EM and the effect this has on the Repository. This page helps the EM Administrator get to grips with it. Let's take a quick look at two regions on this page. First up, there's a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates the relative volume. You can see from the example above that a quick glance shows Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely following behind, though, are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7 MB of data, while the total information collected for WebLogic Server and Application Deployments is 0.38 MB and 0.37 MB respectively. If you want this breakdown of the volume of data collected, simply hover over a bubble in the chart and you'll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost, in terms of number of rows, number of collections and storage cost of data for each metric type. Looking at another panel on this page we can see a different view of this data. This panel shows the Top N metrics (the drop-down allows you to select 10, 15 or 20), sorted by volume of data. In the case above we can see that the largest metric collection (by volume, over the last 30 days) is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It's a great tool for identifying the cause of a sudden increase in Repository storage consumption or redo log and archive log generation.
    Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we'll look at on this page is the Schema tab.

    The Schema Tab

    Selecting this tab brings up a window onto the SYSMAN schema, with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users, not least because well-managed storage ensures the continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend showing that storage in the EM Repository has been consumed steadily over the last few days. This is normal, as the EM installation used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you're likely to see a space-allocation trend that plateaus. The table below the trend chart shows the Top 20 Tables/Indexes, sorted descending by space consumed. You can switch the trend chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker at the top right of this region. The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information is useful for illustrating to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository. Of course, the ability to modify these default retention periods has also been a long-requested feature, and you can do that from this screen too. As there are interdependencies between some data elements, you can't modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups, and retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository's storage footprint and its performance. For now, we're just highlighting the feature's visibility on these new pages. If you're a user of EM12c, we hope the new features you see here address some of the feedback that's been given on these pages over the past few releases. We'll look out for any comments or feedback you have on these pages.

    Read the article

< Previous Page | 272 273 274 275 276 277 278 279 280 281 282 283  | Next Page >