Search Results

Search found 5957 results on 239 pages for 'suggest'.


  • Changing Your Design for Testability

    Sometimes I come across a way of putting something that is pithy and good, not Hallmark trite, but an impactful and concise way of clarifying a previously obscure concept. A recent one of these happy occurrences was when I was reading the excellent Art of Unit Testing by Roy Osherove. After going through the basics of why you'd want to test code and how to do it, Roy confronts a frequent objection to having unit tests, that it ends up changing how you design your components: "When we write unit tests for our code, we are adding another end user (the test) to the object model. That end user is just as important as the original one, but it has different goals when using the model. The test has specific requirements from the object model that seem to defy the basic logic behind a couple of object-oriented principles, mainly encapsulation." [emphasis added by me]

    When I read this, something clicked for me. I used to find it persuasive that because unit tests caused you to change your design, they were more disruptive than they were worth. The counter-argument I heard is that the disruption was OK, because testable design was just obviously better. That argument was not convincing, as it seemed like delusional arrogance to suggest that any one type of design was just inherently better for the particular applications I was building. What was missing was that I was not thinking of unit tests as an additional and equal end user of my design. If I accepted that proposition, then it was indeed obvious that a testable design was better, because now all users of my component would be satisfied.

    Have I accepted that proposition? I'd phrase it slightly differently. I find more and more that having unit tests helps me write better, less buggy code before it gets to production or QA. As I write more unit tests, it gets easier to see how to create testable components, so I don't feel like it's taking me as much extra time up front. I pick and choose components that seem most likely to benefit from automated tests, and it is working out nicely.

    If you already implement Test Driven Development, this whole post was probably a waste of your time <g> If you hate the idea of unit tests, well, probably not a great value prop for you either. However, if you are somewhere in between, at least take a minute and check out a sample chapter from Roy's book at: http://www.manning.com/osherove/.
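
    As a rough illustration of the "test as a second end user" idea (this example is mine, not from Roy's book, and the class and names are hypothetical), here is a minimal Python sketch: the production caller and the test consume the same class, and the only design concession the test demands is an injectable clock.

        # A minimal sketch of what "the test as a second end user" can mean in
        # practice: production code and the test use the same class, but the
        # test needs a seam to control time.
        import datetime


        class TrialPeriod:
            """Tells whether a trial that started on a given date has expired."""

            def __init__(self, started_on, days=30, clock=datetime.date.today):
                # `clock` is the concession to the second end user: production
                # code ignores it, a test passes a lambda returning a fixed date.
                self.started_on = started_on
                self.days = days
                self.clock = clock

            def has_expired(self):
                return (self.clock() - self.started_on).days > self.days


        # Production user: relies on the real clock.
        trial = TrialPeriod(datetime.date(2010, 1, 1))
        print(trial.has_expired())

        # Test user: pins "today" so the assertion is deterministic.
        frozen = TrialPeriod(datetime.date(2010, 1, 1),
                             clock=lambda: datetime.date(2010, 1, 15))
        assert not frozen.has_expired()

    The seam costs one optional constructor argument, which is usually the scale of change that "designing for testability" actually asks for.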

    Read the article

  • Curating projects of deceased friends

    - by Ant
    A very good friend of mine, and an avid programmer, recently passed away. He left nearly 40 projects on BitBucket. Most of them are public, but a few of them are marked as private. I've decided to take on curation duties for the projects rather than leave his work to disappear. If you have been in the same situation, what did you do? Did you open-source everything? Continue development? Delete it all? I'm very interested to hear other people's experiences.

    There are a few reasons why some of the projects are marked as private (private projects on BitBucket are visible only to invited users and the original creator): One of them is an iOS web app that was free in the app store. I've had to remove the app from the store as I'm shutting down his web sites as a favour to his widow. However, I've already made the app public under the GPL v3 (he was a big GPL supporter). One of them contains proprietary code. It can't be open-sourced. Others are very much work-in-progress. I don't know if he intended to make them into hosted, paid services or if he wanted to give the code away under an open-source licence when they were finished.

    Here's a list of the private projects:

    - Some kind of living cell simulator that uses SBML along with Runge-Kutta and Euler algorithms to do... something. There's a fair amount of code here but I don't know what it does or how far along it is. No docs.
    - An accountancy application; it seems to have a solid DB design behind it but there's little code on top of that.
    - A website whose purpose is to suggest good restaurants. Built on Yii. Seems to have a lot of code but I'd need to set up a WAMP stack to see how far along it is.
    - A website intended to host memorials to people who suffered from the same problem he did. Built on Joomla. I'm not sure how much of the code is just Joomla and how much is custom; again, I'd need to get Joomla running to find out.

    I'd just introduced him to Mercurial and BitBucket. All of the private projects are single commits of codebases he either wasn't using version control with or was previously keeping in SVN. I don't have the SVN repositories, so I can't see the commit logs.

    Read the article

  • High-Powered Sites for low Cost

    - by HighAltitudeCoder
    Ahh, I am experiencing the intimidation of my very first post - visible by the whole world. Ok, here goes.

    This first post is nothing exceptional. It is simply a recommendation based (fittingly, I suppose) upon the job search you may be gearing up for. I find myself in this very situation right now. And I will take my own recommendation after posting this entry.

    Job-seekers: To the left you will notice two links under "Recommended Learning". I have found these links to be invaluable when it comes to re-tooling, re-familiarizing, or otherwise re-sharpening my skills when looking for that next job. Often you will find job postings with the text, usually placed after a laborious list of qualifications indicating the company's desire to hire candidates who know what they are doing: "...Looking for a candidate who can hit the ground running...". The interesting thing to me is that I've encountered many individuals who, after speaking and working with them for some time, I've realized are perfectly capable of hitting the ground running - and FAST. But what if they speed off in the wrong direction?

    The next time you spearhead a major task in your job, ask yourself: Am I headed in the wrong direction? There are many ways to do this. In fact, I've found in this new field there are more tempting ways to steer your project in the wrong direction than there are good ones. I don't want to suggest that every one of my posts will fall into the "right direction" category; however, I do think a healthy dose of introspection into the pros and cons will always be beneficial before you set off.

    That said, allow me to expound on the previously mentioned links. These web sites are invaluable. They demonstrate the capabilities of existing as well as new and upcoming tools available in several IDEs. I've viewed many tutorials on LearnVisualStudio.NET, and only one or two so far on TrainingSpot; however, I've been delighted by their simplicity and straightforward approach to proper usage of the particular tool or concept being discussed. They have not (so far in my experience) demonstrated ways of using the tools that become cumbersome, impractical, or error-prone. Each website has step-by-step videos that can be paused and replayed, and, most importantly, they are done in real time. As the author is typing, the viewer gets to experience the coding from a first-person perspective, including syntax errors, unexpected behaviors, IDE setup idiosyncrasies, everything. A subtle value I've gained from these videos is that a certain degree of confusion and introspection is normal when working with new tools and exploring new paths. They (as well as your own experience) are not to be feared, but enjoyed. I highly recommend them. Good work, guys!

    Read the article

  • How to avoid oscillation in async event-based systems?

    - by inf3rno
    Imagine a system where there are data sources which need to be kept in sync. A simple example is model - view data binding in MVC. Now I intend to describe these kinds of systems in terms of data sources and hubs. Data sources publish and subscribe to events, and hubs relay events to data sources. By handling an event, a data source changes its state as described in the event. By publishing an event, the data source puts its current state into the event, so other data sources can use that information to change their state accordingly.

    The only problem with this system is that events can be reflected from the hub or from the other data sources, and that can put the system into an infinite oscillation (async) or an infinite loop (sync). For example:

        A -- data source
        B -- data source
        H -- hub

        A -> H -> A            -- reflection from the hub
        A -> H -> B -> H -> A  -- reflection from another data source

    With sync it is relatively easy to solve this issue. You can compare the current state with the event, and if they are equal, you don't change the state or raise the same event again. With async I could not find a solution yet. The state comparison does not work with async event handling because there is only eventual consistency, and new events can be published in an inconsistent state, causing the same oscillation. For example:

        A(*->x) -> H -> B(y->x)   -- can go parallel with
        B(*->y) -> H -> A(x->y)
        -- so first A changes to x state while B changes to y state
        -- then B changes to x state while A changes to y state
        -- and so on for eternity...

    What do you think: is there an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, etc.?

    Update: I don't think I can make this work without a lot of effort. I think this problem is just the same as syncing multiple databases in a distributed system. So I think what I really need is constraints if I want to prevent this problem in an automatic way. What constraints do you suggest?
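
    To make the question more concrete, here is a small synchronous sketch (my own illustration; all names are hypothetical) of the usual building blocks for damping the echo: events carry their origin and a sequence number, the hub never relays an event back to its origin, and a data source only publishes on local changes, never while applying a remote event. It does not by itself resolve the parallel-update race described above; for that you still need a constraint such as a single ordering authority or a last-writer-wins rule on the sequence numbers.

        # Sketch: origin tagging, no echo to the sender, no re-publish on apply,
        # and stale-event filtering by sequence number.
        import itertools


        class Hub:
            def __init__(self):
                self.sources = []

            def attach(self, source):
                self.sources.append(source)
                source.hub = self

            def publish(self, event):
                for source in self.sources:
                    if source is not event["origin"]:     # no reflection to the sender
                        source.apply(event)


        class DataSource:
            _seq = itertools.count()                      # shared monotonic counter

            def __init__(self, name):
                self.name = name
                self.state = None
                self.hub = None
                self.last_seen = {}                       # origin name -> last seq applied

            def set_local(self, value):
                """A real user change: update state and announce it."""
                self.state = value
                self.hub.publish({"origin": self, "from": self.name,
                                  "seq": next(self._seq), "state": value})

            def apply(self, event):
                """A remote change: update state but do NOT publish again."""
                if self.last_seen.get(event["from"], -1) >= event["seq"]:
                    return                                # stale or duplicate event
                self.last_seen[event["from"]] = event["seq"]
                self.state = event["state"]


        hub = Hub()
        a, b = DataSource("A"), DataSource("B")
        hub.attach(a)
        hub.attach(b)
        a.set_local("x")          # propagates A -> hub -> B exactly once
        assert a.state == b.state == "x"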

    Read the article

  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced. It's a Microsoft SQL Azure Lab and its codename is Microsoft "Data Explorer". According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. Nonetheless we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data

    To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an "SSIS on Azure" addressed to the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to the comment made on his post, I had a positive answer to my first impression: "…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging."

    The typical operations of the ETL phase (processing and organization of data in different formats) can be obtained thanks to Data Explorer Mashup. This is an image of the tool:

    The flexibility in the manipulation of information is given by the Data Explorer Formula Language. This is a formula-based, Excel-style language. Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx

    In light of this new project, there is no doubt about the intention of Microsoft to get closer and closer to the Power User, providing them with flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchical data management... What do you think about that?

    Read the article

  • Why do C++ people love multithreading when it comes to performance?

    - by user1849534
    I have a question: it's about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here:

    - an async approach, basically based on signals, or just "async" as it is called by many papers and languages like the new C# 5.0 for example, with a "companion thread" that manages the policy of your pipeline
    - a concurrent approach or multi-threading approach

    I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these two paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about concurrency 90% of the time when they want to speed things up or make good use of their resources.

    I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. Well, in this case performance is horrible with a multi-threaded application; even a relatively low number of threads like 3-4-5 can be a problem, and the application is unresponsive and just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result and doesn't hang; it's responsive and there is much better scaling going on. I have also discovered that a context switch in the threading world is not that cheap in a real-world scenario; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed.

    On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as with the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated or that the newer CPUs have more than 2 cores is no bargain for me.

    From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, both paradigms offer no guarantee about when the task or the job will be done at a certain point in time, making them really similar from a functional point of view.

    Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
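
    As a concrete (if language-shifted) illustration of the async side of the argument, here is a short sketch in Python rather than C++, with made-up task counts and delays: an event loop multiplexes many I/O-bound waits on a single thread, so there are no kernel context switches between the tasks, whereas a thread-per-task version of the same workload would need 100 threads or a pool serializing them.

        # Many I/O-bound "requests" multiplexed on one thread by an event loop.
        import asyncio
        import time


        async def fake_io(task_id, delay):
            await asyncio.sleep(delay)          # stands in for a socket/disk wait
            return task_id


        async def main():
            started = time.perf_counter()
            # 100 concurrent waits of 0.1s each, all on a single thread.
            results = await asyncio.gather(*(fake_io(i, 0.1) for i in range(100)))
            elapsed = time.perf_counter() - started
            print(f"{len(results)} tasks in {elapsed:.2f}s on a single thread")


        asyncio.run(main())   # prints roughly "100 tasks in 0.10s ..."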

    Read the article

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..."

    Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:

    1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-wide spaces between them. However, it appears blank.
    2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect that this should produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar.
    3) 0x55555555 = 1010101010101010101010101010101, therefore writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color.
    4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.

    I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, supports this conclusion.

    Questions:
    1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as the spec suggests?
    2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am however forcing a 3.0 context.)
    3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
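
    On question 3: the unpacking does not have to happen in a shader; it can also be done once on the CPU before upload. Here is a sketch (mine, with placeholder sizes, and with the GL upload only indicated in a comment) that expands each bit of the packed bitmap into an 8-bit texel with NumPy, sidestepping GL_BITMAP entirely.

        # Expand a packed 1-bit bitmap into an 8-bit pixmap on the CPU.
        import numpy as np

        W, H = 256, 256                      # texture size in pixels (placeholders)

        packed = np.zeros((H, W // 8), dtype=np.uint8)
        packed[:, :] = 0x55                  # the 0b01010101 pattern from experiment 3

        # unpackbits turns each byte into 8 elements of 0/1, most significant bit
        # first, matching the "string of unsigned bytes" wording in the spec.
        bits = np.unpackbits(packed, axis=1)         # shape (H, W), values 0 or 1
        pixmap = (bits * 255).astype(np.uint8)       # 0 -> black, 1 -> white

        # The pixmap could then be uploaded with an ordinary single-channel call,
        # e.g. glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, W, H, 0, GL_RED,
        #                   GL_UNSIGNED_BYTE, pixmap)
        # (or GL_LUMINANCE on older contexts), avoiding GL_BITMAP entirely.
        print(pixmap[0, :16])                # expected: 0 255 0 255 ... for 0x55 rows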

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have, as well as to see if I've missed anything. So to that end, I give you my table:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryCode] [nvarchar](10) NOT NULL,
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [int] NOT NULL,
            [Period] [datetime] NOT NULL,
            [Clicks] [int] NOT NULL,
            [Impressions] [int] NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
                [Period] ASC,
                [PlacementID] ASC,
                [CreativeID] ASC,
                [PublisherID] ASC,
                [RequestedZoneID] ASC,
                [AboveFold] ASC,
                [CountryCode] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    And now some assumptions and additional information:

    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard (e.g. US) and there is a country table with an integer ID already. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size:

    Design Goals: This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor: There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
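
    As a back-of-the-envelope sketch of the "glaringly obvious" part (my own estimate, not from the follow-up post, using approximate SQL Server storage sizes and ignoring row headers and nonclustered-index copies, so treat the numbers as indicative only):

        # Rough per-row savings from narrowing column types alone, over 200M rows.
        ROWS = 200_000_000

        # (column, current bytes, proposed bytes, proposed type)
        changes = [
            ("CountryCode", 6, 2, "smallint FK to the country table"),  # nvarchar(10) holding 'US'
            ("Period",      8, 3, "date instead of datetime"),
            ("AboveFold",   4, 2, "smallint (or tinyint if -1 is remapped)"),
            ("Clicks",      4, 2, "smallint (max 5000)"),
        ]

        saved_per_row = sum(cur - new for _, cur, new, _ in changes)
        for name, cur, new, note in changes:
            print(f"{name:12s} {cur} -> {new} bytes  ({note})")
        print(f"~{saved_per_row} bytes/row, ~{saved_per_row * ROWS / 2**30:.1f} GiB "
              "in the clustered index alone")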

    Read the article

  • Help with DB Structure, VoD site

    - by Chud37
    I have a video-on-demand style site that hosts series of videos under different modules. However, with the way I have designed the database, it is proving to be very slow. I have asked this question before and someone suggested indexing, but I cannot seem to get my head around it. But I would like someone to help with the structure of the database here, to see if it can be improved. The core table is Videos:

        ID       bigint(20)   (primary key, auto-increment)
        pID      text
        airdate  text
        title    text
        subject  mediumtext
        url      mediumtext
        mID      int(11)
        vID      int(11)
        sID      int(11)

    pID is a unique 5-digit string for each video that is a shorthand identifier. Airdate is the timestamp (stored in text format; right there, maybe I should change that to an auto-updating TIMESTAMP), title is self-explanatory, subject is self-explanatory, url is the hard link on the site to the video, mID is joined to another table for the module title, vID is joined to another table for the language of the video (english, russian, etc.), and sID is the summary for the module, a paragraph stored in an external database.

    The slowest part of the website is the logging part of it. I store the data in another table called 'Hits':

        id      mediumint(10)  (primary key, auto-increment)
        progID  text
        ts      int(10)

    Again (this was all made a while ago), my timestamp (ts) is an INT instead of ON UPDATE CURRENT_TIMESTAMP, which I guess it should be. However, this table is now 47,492 rows long and the script that I wrote to process it is very, very slow; so slow in fact that it times out. A row is added to this table each time a user clicks 'Play' on the website, so the progID is the same as the pID, and it logs the PHP time() timestamp in ts. Basically I load the entire 'Hits' table into an array and count the hits in each day using the ts column. I am guessing (I'm quite slow at all this, but I had no idea this would happen when I built the thing) that this is possibly the worst way to go about this.

    So my questions are as follows: Is there a better way of structuring the 'Videos' table? If so, what do you suggest? Is there a better way of structuring 'Hits'? If so, please help/tell me! Or is it the fact that my tables are fine and the PHP coding is crappy?
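
    One sketch of the indexing suggestion, independent of the schema question: index the ts column and let the database do the per-day counting, instead of loading all 47,492 rows into a PHP array. The snippet below uses SQLite purely so it runs standalone (the real stack is PHP/MySQL, where the equivalent grouping would be on DATE(FROM_UNIXTIME(ts))); the table and column names are taken from the post, the sample data is made up.

        # Let the database aggregate hits per day instead of looping in the app.
        import sqlite3
        import time

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE hits (id INTEGER PRIMARY KEY, progID TEXT, ts INTEGER)")
        db.execute("CREATE INDEX idx_hits_ts ON hits (ts)")   # the index the scan needs

        now = int(time.time())
        db.executemany("INSERT INTO hits (progID, ts) VALUES (?, ?)",
                       [("ABC12", now - i * 3600) for i in range(200)])

        # One aggregate query replaces the "load everything into an array" loop.
        rows = db.execute(
            "SELECT date(ts, 'unixepoch') AS day, COUNT(*) AS plays "
            "FROM hits GROUP BY day ORDER BY day"
        ).fetchall()

        for day, plays in rows:
            print(day, plays)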

    Read the article

  • Tiling Problem Solutions for Various Size "Dominoes"

    - by user67081
    I've got an interesting tiling problem. I have a large square image (size 128k, so 131072 squares) with dimensions 256x512... I want to fill this image with certain grain types (a 1x1 tile, a 1x2 strip, a 2x1 strip, and a 2x2 square) and have no overlap, no holes, and no extension past the image boundary. Given some probability for each of these grain types, a list of the number required to be placed is generated for each. Obviously an iterative/brute-force method doesn't work well here if we just randomly place the pieces; instead a certain algorithm is required:

    1) All 2x2 square grains are randomly placed until exhaustion.
    2) 1x2 and 2x1 grains are randomly placed alternately until exhaustion.
    3) The remaining 1x1 tiles are placed to fill in all holes.

    It turns out this algorithm works pretty well for some cases and has no problem filling the entire image; however, as you might guess, increasing the probability (and thus number) of 1x2 and 2x1 grains eventually causes the placement to stall (since there are too many holes created by the strips and not all of them can be placed). My approach to this has been as follows:

    1) Create a mini-image of size 8x8 or 16x16.
    2) Fill this image randomly, following the algorithm specified above, so that the desired probability of the entire image is realized in the mini-image.
    3) Create N of these mini-images and then randomly, successively place them in the large image.

    Unfortunately there are some downsides to this simplification:

    1) Given the small size of the mini-images, nailing an exact probability for the entire image is not possible. For example, if I want p(2x1) = p(1x2) = 0.4, the mini-image may only give 0.41 as the closest probability.
    2) The mini-images create a pseudo-boundary where no overlaps occur, which isn't really descriptive of the model this is being used for.
    3) There is only a fixed number of mini-images, so I'm not sure how random this really is.

    I'm really just looking to brainstorm about possible solutions to this. My main concern is really to nail down closer probabilities; now, one might suggest I just increase the mini-image size. Well, I have, and it turns out that in certain cases (p(1x2) = p(2x1) = 0.5) the 16x16 mini-image isn't even iteratively solvable. So it's pretty obvious how difficult it is to randomly solve this for anything greater than 8x8 sizes. So I'd love to hear some ideas. Thanks
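
    For reference, here is a minimal sketch of the three-phase placement described above (the grid size and grain counts are placeholders, and it makes no attempt to fix the stalling problem; it only makes the procedure concrete):

        # Three-phase random placement: 2x2 squares, then 1x2/2x1 strips
        # alternately, then 1x1 tiles filling every remaining hole.
        import random

        W, H = 16, 16
        grid = [[0] * W for _ in range(H)]          # 0 = empty, otherwise a grain id


        def fits(r, c, h, w):
            return (r + h <= H and c + w <= W and
                    all(grid[r + i][c + j] == 0 for i in range(h) for j in range(w)))


        def place(h, w, grain_id, tries=2000):
            for _ in range(tries):
                r, c = random.randrange(H), random.randrange(W)
                if fits(r, c, h, w):
                    for i in range(h):
                        for j in range(w):
                            grid[r + i][c + j] = grain_id
                    return True
            return False                            # stalled: could not place this grain


        # Phase 1: all 2x2 squares, then phase 2: alternate 1x2 / 2x1 strips.
        for _ in range(20):
            place(2, 2, 1)
        for _ in range(30):
            place(1, 2, 2)
            place(2, 1, 3)

        # Phase 3: fill every remaining hole with 1x1 tiles.
        for r in range(H):
            for c in range(W):
                if grid[r][c] == 0:
                    grid[r][c] = 4

        print(sum(cell == 4 for row in grid for cell in row), "cells ended up as 1x1 tiles")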

    Read the article

  • Is it unusual for a small company (15 developers) not to use managed source/version control?

    - by LordScree
    It's not really a technical question, but there are several other questions here about source control and best practice. The company I work for (which will remain anonymous) uses a network share to host its source code and released code. It's the responsibility of the developer or manager to manually move source code to the correct folder depending on whether it's been released and what version it is and stuff. We have various spreadsheets dotted around where we record file names and versions and what's changed, and some teams also put details of different versions at the top of each file. Each team (2-3 teams) seems to do this differently within the company. As you can imagine, it's an organised mess - organised, because the "right people" know where their stuff is, but a mess because it's all different and it relies on people remembering what to do at any one time. One good thing is that everything is backed up on a nightly basis and kept indefinitely, so if mistakes are made, snapshots can be recovered. I've been trying to push for some kind of managed source control for a while, but I can't seem to get enough support for it within the company. My main arguments are: We're currently vulnerable; at any point someone could forget to do one of the many release actions we have to do, which could mean whole versions are not stored correctly. It could take hours or even days to piece a version back together if necessary We're developing new features along with bug fixes, and often have to delay the release of one or the other because some work has not been completed yet. We also have to force customers to take versions that include new features even if they just want a bug fix, because there's only really one version we're all working on We're experiencing problems with Visual Studio because multiple developers are using the same projects at the same time (not the same files, but it's still causing problems) There are only 15 developers, but we all do stuff differently; wouldn't it be better to have a standard company-wide approach we all have to follow? My questions are: Is it normal for a group of this size not to have source control? I have so far been given only vague reasons for not having source control - what reasons would you suggest could be valid for not implementing source control, given the information above? Are there any more reasons for source control that I could add to my arsenal? I'm asking mainly to get a feel for why I have had so much resistance, so please answer honestly. I'll give the answer to the person I believe has taken the most balanced approach and has answered all three questions. Thanks in advance

    Read the article

  • Powerpoint missing from DCOM config

    - by Paul Prewett
    I have an application that automates the creation of powerpoint files in an ASP.NET environment. This requires that I install powerpoint on the server and also set permissions in the DCOM configuration snap-in (dcomcnfg) to give permissions to the launching user ([DOMAIN]\ASPNET in this case) to run the application. I have this setup running successfully on several Win2k3 machines. I am configuring my first Win2k8 machine and after installing powerpoint on the server, the "Microsoft Powerpoint Presentation" node in DCOM config is not showing up. Other installed Office apps are showing (Excel, Graph, etc...), just not Powerpoint. So when I attempt to run the application, I get an "Access denied" error, which is exactly what I would expect. The user doesn't have permission. Therefore, access denied. The specific error log entry is: The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {91493441-5A91-11CF-8700-00AA0060263B} to the user [DOMAIN]\ASPNET I searched the entire list for the CLSID, too, thinking maybe the name wasn't loading properly. No dice. I also re-ran the setup program for Office thinking maybe there would be some option or something I unchecked in the custom setup options, but I saw nothing that looked helpful. I'm flummoxed. Can anyone out there suggest something to help me get Powerpoint to show up in the list of DCOM applications? Many thanks.

    Read the article

  • Hyper-V R2 cannot mount ISO from network location (UNC path)

    - by Entity_Razer
    So, as the name suggests, I'm trying to mount an ISO from a network share using the UNC path on a Hyper-V R2 cluster. This is a pure demo/test setup with: 2x Hyper-V R2, 1x NAS/iSCSI CSV. Cluster management is happening through the MMC with the RSAT tools. What I've done so far is: set up the cluster and configure quorum, add CSV shares and disks, and set up 1 virtual machine on the Hyper-V 1 node. What I'm trying to do is: you go to Settings --- DVD Drive --- Use network location --- pick the ISO file and press "Apply". The error I was getting at first was "User account does not have rights to mount iso". I stopped getting that message when I went to the Hyper-V node settings and ticked "Use Default Credentials Automatically". Now I no longer get the "user does not have rights..." message, but I get the following: Error applying DVD Drive changes - Failed to remove device Microsoft Synthetic DVD Drive: "the specified network resource or device is no longer available". I've googled the problem but am unable to find a solution. Anyone here able to help me out? Much obliged!

    Read the article

  • LiteSpeed enable Access-Control-Allow-Origin (no response header on CORS request)

    - by Joe Coder Guy
    Seriously, I can't find a single page discussing this for LiteSpeed. Using this format in the htaccess, "Header set Access-Control-Allow-Origin http://aSite.com" (and https), sends the setting in the HTTP response header, but I still get the "XMLHttpRequest cannot load https://aSite.com/aFile.php. Origin aSite.com is not allowed by Access-Control-Allow-Origin" error when trying to access https from an http origin. Also, I receive no response header for https; only that message shows up in Chrome. Is the server still blocking it even though I've sent the proper headers? I read elsewhere that it helps to add these terms:

        Access-Control-Allow-Headers X-Requested-With
        Access-Control-Allow-Methods OPTIONS, GET, POST
        Access-Control-Allow-Headers Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control

    but I don't see these in my headers. Using these, my PHP files aren't even reached (because they register no errors or anything), so it looks like it comes from the server only, but what do I know. Thanks in advance!

    Update: Since there's no response header, Prashant's answer seems to suggest it's a server issue, since it worked on another server: http://stackoverflow.com/questions/11953132/no-response-obtained-while-implementing-cors. Anyone know how to flip this switch?

    Headers work now: the LiteSpeed format was bad. It should look like this. Still being denied, though.

        Header set Access-Control-Allow-Headers X-Requested-With
        Header set Access-Control-Allow-Methods OPTIONS
        Header set Access-Control-Allow-Methods GET
        Header set Access-Control-Allow-Methods POST
        Header set Access-Control-Allow-Headers Content-Type
        Header set Access-Control-Allow-Headers Depth
        Header set Access-Control-Allow-Headers User-Agent
        Header set Access-Control-Allow-Headers X-File-Size
        Header set Access-Control-Allow-Headers X-Requested-With
        Header set Access-Control-Allow-Headers If-Modified-Since
        Header set Access-Control-Allow-Headers X-File-Name
        Header set Access-Control-Allow-Headers Cache-Control

    Read the article

  • Cisco ASA and SixXS IPv6 tunnel endpoint?

    - by Martijn Heemels
    I recently installed a Cisco ASA 5505 firewall on the edge of our LAN. The setup is simple: Internet <-- ASA <-- LAN. I would like to provide the hosts in the LAN with IPv6 connectivity by setting up a 6in4 tunnel to SixXS. It would be nice to have the ASA as the tunnel endpoint so it can firewall both IPv4 and IPv6 traffic. Unfortunately the ASA apparently can't create a tunnel itself, and can't port-forward protocol 41 traffic, so I believe I would have to do one of the following instead: Set up a host with its own IP outside the firewall, and have that function as the tunnel endpoint; the ASA can then firewall and route the v6 subnet to the LAN. Or set up a host inside the firewall that functions as the endpoint, separated via VLAN or whatever, and loop the traffic back into the ASA where it can be firewalled and routed; this seems contrived, but would allow me to use a VM instead of a physical machine as the endpoint. Any other way? What would you suggest is the optimal way to set this up? P.S. I do have a spare public IP address available if needed, and can spin up another VM in our VMware infrastructure.

    Read the article

  • Managing scalability and availability with two servers running Apache Httpd, Apache Mina and MySQL

    - by celalo
    Hello, I am not a developer and I don't have much experience with scalable server architectures, but I am in need of a highly available and scalable system for one of my projects. There are going to be two servers I am going to use for the time being, both with 4-core CPUs and 8 GB RAM with RAID structures, running CentOS 5.4. I will also have a feature called "Failover IP" which enables an IP address to be redirected to another server within a short time.

    The applications which will be run on the servers: There is going to be a Java application based on the Apache MINA server for handling TCP requests from some hundreds of network devices, where the devices are going to send requests as often as one request per minute. Handling those requests includes parsing the requests and inserting a few rows into the database; parsing requests before inserting data into the DB takes negligible time. There is going to be a MySQL server, as I stated above. Also there is going to be a PHP web application running on the Apache httpd server which uses the same DB as the Java application.

    What I wish is to make the most of those two servers. I was imagining having the servers identical, sharing the work load. MySQL could be a cluster maybe? And if some application fails or the whole machine goes down, the other will continue serving the requests seamlessly, keeping in mind that the "Failover IP" feature will be available for me to take advantage of. It should also be kept in mind that the number of servers could increase in time, to meet demand.

    What can you suggest? What kinds of tools can I make use of? What kinds of monitoring software (paid/unpaid) are available to me? Thanks in advance.

    Read the article

  • How to disable windows server 2008 timestamp response

    - by Cal
    Posted this question on Stack Overflow but then got instructed to post it here: I was using Rapid7's Nexpose to scan one of our web servers (Windows Server 2008), and got a vulnerability for timestamp response. According to Rapid7, timestamp response should be disabled: http://www.rapid7.com/db/vulnerabilities/generic-tcp-timestamp

    So far I have tried several things:

    - Edit the registry: add a "Tcp1323Opts" value to HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and set it to 0. http://technet.microsoft.com/en-us/library/cc938205.aspx
    - Use this command: netsh int tcp set global timestamps=disabled
    - Tried the PowerShell command: Set-netTCPsetting -SettingName InternetCustom -Timestamps disabled (got error: Set-netTCPsetting : The term 'Set-netTCPsetting' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.)

    None of the above attempts was successful; after a re-scan we still got the same alert. Rapid7 suggested using a firewall that's capable of blocking it, but we want to know if there is a setting in Windows to achieve it. Is it through a specific port? If yes, what is the port number? If not, could you suggest a 3rd-party firewall that is capable of blocking it? Thank you very much.

    Read the article

  • blu-ray archiving in vmware ESXi 4

    - by spacecadet77
    Hi, I need some advice about using a Blu-ray writer for archiving data on VMware ESXi 4. At the office we have an IBM System x3400 Tower server with the ESXi 4 hypervisor and OpenSUSE and CentOS GNU/Linux systems as guests. Will a Blu-ray writer work in this setup, and if it will, is there any particular model you can suggest? Best regards

    IBM System x3400 Tower server specification: 1x Intel Quad-Core Xeon E5410 2.33GHz/ 12MB/ 1333MHz (2x CPU max) Intel 5000P chipset, 2x 1GB PC2-5300 DDR2 667MHz SDRAM ECC Chipkill (32GB max) 2x4GB (2x2GB) PC2-5300 CL5 ECC DDR2 FBDIMM (x3400, x3550, x3650) SAS/SATA Hot-Swap Open Bay (0xHDD std, 4xHDD max, 8xHDD optional) ServeRAID 8K dual channel SAS/SATA controller (RAID 0,1,1E,10,5,6, 256MB, Battery Backup) Graphics ATI® RN50(ES1000) 16MB DDR, CD-RW/DVD Combo no FDD GigaEthernet, Tower with Power Supply 835W (opt Redudant) Slot 1: half-length, PCI-Express x8(x4 electrical) Slot 2: full, PCI-Express x8 Slot 3: full, PCI-Express x8 Slot 4: full, 64-bit 133MHz 3.3v PCI-X Slot 5: full, 64-bit 133MHz 3.3v PCI-X , Slot 6: half-length, 32-bit 33MHz 5.0v PCI ports: 4x USB (Vers 2.0), 2x PS/2, parallel, 2x serial (9-pin), VGA, RJ-45 (ethernet ), RJ-45 (sys mgm) HDD 4 x TB 7200rpm / Serial ATA II 3.0Gb/s / 16MB, RoHS

    Read the article

  • How to shrink Windows 7 boot partition with unmovable files.

    - by Alex Che
    I have just bought an HP laptop with Windows 7 (64-bit). It has a 500 GB HDD with three partitions: a small hidden system partition, a 12 GiB HP recovery partition, and a 450 GiB C: boot partition. I would like to split this large C: partition into two partitions, leaving only 100 GiB for the system and giving the rest to a new data partition. Although the Windows built-in Disk Management utility has an option to shrink the bootable partition, it only allows me to shrink it roughly by half, even though only 20 GiB of the partition is used. As far as I understand, unmovable system files lie in the middle of the partition, preventing the Disk Management utility from doing what I want. And since new HP laptops don't come with OS installation disks (they only allow you to create recovery disks yourself), I can't just repartition the HDD and then reinstall the OS. So, is there any way to shrink the C: bootable partition and keep Windows 7 working? P.S.: I have tried using the 3rd-party GParted utility, and after shrinking the partition Windows 7 stopped booting with a BSOD. System recovery didn't work, and I had to do a factory recovery. Since this is a long process, I would like to avoid doing it again :) So, please, suggest only proven solutions.

    Read the article

  • Video and LAN Cards are not recognized on Intel DG41RQ MB

    - by artyom
    Hello, my 32-bit Debian Etch with the Etch-n-Half kernel 2.6.24 and the Etch-n-Half video driver xserver-xorg-video-intel 2.2.1-1~etchnhalf2 does not recognize my on-board LAN and video. I can work only with a VESA display and an additional PCI LAN card (that I wanted to use for a cross-over cable). Output of lspci:

        00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03)
        00:01.0 PCI bridge: Intel Corporation 4 Series Chipset PCI Express Root Port (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03)
        00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 01)
        00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 01)
        00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01)
        00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01)
        04:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)

    According to the MB spec:

        Video: Intel G41 Express Chipset Graphics and Memory Controller Hub Graphics (GMCH)
        LAN: Realtek 8111D Gigabit Ethernet Controller LAN Support

    When I run dpkg-reconfigure xserver-xorg it suggests the i810 driver, but X does not start; it does not start with the intel driver either; only vesa works. The on-board LAN I can't even find in lspci... the last Realtek entry is the external 100 Mb card. Where to start from?

    Read the article

  • Avoiding DNS timeouts when a dns server fails

    - by user65124
    Hi there. We have a small datacenter with about a hundred hosts pointing to 3 internal dns servers (bind 9). Our problem comes when one of the internal dns servers becomes unavailable. At that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock linux resolver doesn't really have the concept of "failing over" to a different dns server. You can adjust the timeout and number of retries it uses, (and set rotate so it will work through the list), but no matter what settings one uses our services perform much more slowly if a primary dns server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handled this issue? I can see 3 possible types of solutions: Use linux-ha/Pacemaker and failover ips (so the dns IP VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing). Run a local dns server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks dns servers as up or down, and won't use 'down' dns servers. Any-casting seems to work only at the ip routing level, and depends on route updates for server failure. Multi-casting seemed like it would be a perfect answer, but bind does not support broadcasting or multi-casting, and the docs I could find seem to suggest that multicast dns is more aimed at service discovery and auto-configuration rather than regular dns resolving. Am I missing an obvious solution?

    Read the article

  • Remote RIB iLO on Proliant via RIBCL

    - by Wudang
    I'm trying to automate a process for our ops. The process requires that some Windows servers running on blades are shut down, left down for a few hours, then restarted when some other processes complete. This is done by an op logging on to each blade's iLO web interface to stop and start. I've been trying to automate this with HP's cpqlocfg program, with partial success. I can issue the GET_POWER, GET_USER_INFO, etc. commands, but SET_HOST_POWER fails in a specific way. Using the cpqlocfg GET_EVENTLOG command I can see the XML login events and the power command being issued from the iLO interface, but then nothing happens. Some hints from googling suggest ACPI isn't configured properly, but I can't find any hits on how to verify this. Am I even using the right command? There are also a few other options like PRESS_PWR_BUTTON, etc. The problem is I have nowhere to test this; all I can do at the moment is give a script to ops and ask them to try it at 4am on a Sunday when they run the procedure. The shutdown is trivial, as I can use the Windows "shutdown" command; it's the power-on that I need help with. Anyone done this? I'd tag this "rib ribcl ilo" but lack the rep points, sorry.

    Read the article

  • Remote Desktop Mobile mangles barcodes coming from scanner

    - by sfonck
    We have an application here using handhelds to scan barcodes. These handhelds actually open a remote desktop session to a server where the application runs. Works fine. Now we have bought some new Motorola MC55s running 'Windows Mobile 6.1 Classic', and when using the application over remote desktop, it mangles the characters of the barcodes.... I have already tried the following things:

    - When scanning a barcode on the MC55 itself, it is displayed correctly.
    - When scanning a barcode via the remote desktop into a Notepad session, it is incorrect.
    - Played with all options of 'Remote Desktop Mobile' - no result.
    - Disabled 'autocorrect' and 'suggest words when entering text' in the input settings - no result.

    The strange thing is: a barcode which consists of only numbers gets scanned correctly; the mangled characters come through in lower case; and for some codes a \t is mangled in between (it should normally be entered after the barcode), e.g.:

    - 'PERIN4' becomes 'ERINp4'
    - 'MGZB' becomes 'GZB m'
    - 'BAK664' becomes 'AK664 b'
    - 'MAGBFA01' becomes 'AGBFmA01'
    - '5021879949500' gets scanned correctly

    Final solution: The supplier of the handhelds said the handheld was sending the characters too fast over the remote desktop connection. They changed the handheld to wait 50ms between sending each character, which now produces correct results. Scanning a barcode became somewhat slower, but it's barely noticeable to end users.

    Read the article

  • Fedora 13 - No module named yum

    - by drozzy
    This is driving me bananas! After a recent update in Fedora 13 64bit, my yum is gone: $> yum update There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: No module named yum Please install a package which provides this module, or verify that the module is installed correctly. I tried looking for an RPM yum package - to install yum. I went to the Fedora site: http://fedoraproject.org/wiki/Tools/yum Call me blind but I cannot find it anywhere on that page! Most of the solutions suggest repairing yum... with yum! But I don't have yum? Yum yum yum? :< Any help? Here are some outputs for rpm commands: $> rpm -ql python | grep "site-packages$" /usr/lib/python2.6/site-packages /usr/lib64/python2.6/site-packages $> rpm -ql yum | grep "site-packages/yum$" /usr/lib/python2.6/site-packages/yum

    Read the article

  • APC fragmentation woes on Apache AWS EC2 Small instance with WordPress and W3TC

    - by two7s_clash
    AWS EC2 Small instance, Apache 2 running WordPress and W3TC. Within an hour, my APC fragmentation hits 100%. My APC settings are: apc.enabled = 1 apc.shm_segments = 1 apc.shm_size = 100M apc.optimization = 0 apc.num_files_hint = 512 apc.user_entries_hint = 1024 apc.ttl = 7200 apc.user_ttl = 7200 apc.gc_ttl = 3600 apc.cache_by_default = 1 apc.use_request_time = 1 apc.filters = "apc\.php$" apc.mmap_file_mask = "/tmp/apc.XXXXXX" apc.slam_defense = 0 apc.file_update_protection = 2 apc.enable_cli = 0 apc.max_file_size = 2M apc.stat = 1 apc.write_lock = 1 apc.report_autofilter = 0 apc.include_once_override = 0 apc.rfc1867 = 0 apc.rfc1867_prefix = "upload_" apc.rfc1867_name = "APC_UPLOAD_PROGRESS" apc.rfc1867_freq = 0 apc.localcache = 0 apc.localcache.size = 256M apc.coredump_unmap = 0 apc.stat_ctime = 0 apc.canonicalize = 1 apc.lazy_functions = 0 apc.lazy_classes = 0 /etc/php.d/apc.ini More poop can be seen here. Mostly cribed settings from here. The shm was meant to be whittled down from such a high value after some observation, but apparently such a large value isn't even high enough.... I found an similar question/answer here. I do have some virtual hosts setup, but they aren't being touched much at all. Having users logged into the admin panel of WP does make things worse, but that's certainly not the main culprit. The question asker seems to suggest that it turns out W3TC is probably causing the problem, which the plugin author seems to agree with, but there aren't any helpful details beyond that. Why is it causing the problem? Do I just take it for now and turn off object caching with APC? Is there nothing I can do? Does having it turned on without being used for object caching actually help anything? Would memcache be an ok substitute just for object caching here? Finally, maybe I just shouldn't worry so much about the fragmentation?

    Read the article
