Search Results

Search found 3315 results on 133 pages for 'magic packet'.


  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource: the setup, configuration, management, balancing and so on that is needed so that a user – who might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility.

    The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud” – but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider.

    Lots of companies help you do this in your own data centers, from VMware to IBM and many others. Microsoft’s offering in this space is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we’re happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you already know how to do.

    So congrats on your new role as a “Cloud Provider”. If you have an e-mail system or a database platform, you can put that right on your resume.

    Read the article

  • XNA RenderTarget2D Sample

    - by Michael B. McLaughlin
    I remember being scared of render targets when I first started with XNA. They seemed like weird magic and I didn’t understand them at all. There’s nothing to be frightened of, though, and they are pretty easy to learn how to use.

    The first thing you need to know is that when you’re drawing in XNA, you aren’t actually drawing to the screen. Instead you’re drawing to this thing called the “back buffer”. Internally, XNA maintains two sections of graphics memory. Each one is exactly the same size as the other and has all the same properties (such as surface format, whether there’s a depth buffer and/or a stencil buffer, and so on). XNA flips between these two sections of memory every update-draw cycle. So while you are drawing to one, it’s busy drawing the other one on the screen. When the current update-draw cycle ends, it flips, and the section you were just drawing to gets drawn to the screen, while the one that was being drawn to the screen before is now the one you’ll be drawing on. This is what’s meant by “double buffering”. If you drew directly to the screen, the player would see all of those draws taking place as they happened, and that would look odd and not very good at all.

    Those two sections of graphics memory are render targets. A render target is simply a section of graphics memory to which things can be drawn. In addition to the two that XNA maintains automatically, you can also create and set your own using RenderTarget2D and GraphicsDevice.SetRenderTarget. Using render targets lets you do all sorts of neat post-processing effects (like bloom) to make your game look cooler. It also lets you do things like motion blur, and lets you create mirrors in 3D games. There are quite a lot of things that render targets let you do.

    To go along with this post, I wrote up a simple sample for how to create and use a RenderTarget2D (a condensed sketch of the same pattern appears at the end of this post). It’s available under the terms of the Microsoft Public License and is available for download on my website here: http://www.bobtacoindustries.com/developers/utils/RenderTarget2DSample.zip . Other than the ‘using’ statements, every line is commented in detail so that it should (hopefully) be easy to follow along with and understand. If you have any questions, leave a comment here or drop me a line on Twitter.

    One last note. While creating the sample I came across an interesting quirk. If you start by creating a Windows Game, and then make a copy for Windows Phone 7, the drop-down that lets you choose between drawing to a WP7 device and the WP7 emulator stays grayed out. To resolve this, you need to right-click on the Windows Phone 7 version in the Solution Explorer and choose “Set as StartUp Project”. The bar will then become active, letting you change the target to which you wish to deploy. If you want another version to be the one that starts up when you press F5 to start debugging, just go and right-click on that version and choose “Set as StartUp Project” for it once you’ve set the WP7 target (device or emulator) that you want.
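    As a hedged, condensed sketch of that pattern (sizes and names here are illustrative; the downloadable sample above is the authoritative, fully commented version):

        // Fields in the standard XNA 4.0 Game template:
        SpriteBatch spriteBatch;
        RenderTarget2D sceneTarget;

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            // Just like the back buffer: a block of graphics memory we can draw into.
            sceneTarget = new RenderTarget2D(GraphicsDevice, 800, 480);
        }

        protected override void Draw(GameTime gameTime)
        {
            // Redirect drawing into our own render target...
            GraphicsDevice.SetRenderTarget(sceneTarget);
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            // ...draw the scene here...
            spriteBatch.End();

            // ...then switch back to the back buffer and draw the target as a
            // texture. This is the hook point for post-processing (bloom, blur).
            GraphicsDevice.SetRenderTarget(null);
            GraphicsDevice.Clear(Color.Black);
            spriteBatch.Begin();
            spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }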

    Read the article

  • Moving from Silverlight 4 Beta to RC - Part 1

    The other day I had finished up my Task-It webinar, written a few blog posts, and knew the time had come to move from my Silverlight 4 Beta environment up to the latest RC (release candidate) bits that were released last Monday. What disappointed me when I went to the Silverlight 4 Information Page is that it told me what to install, but not what to uninstall first.

    Uninstalling

    I'm not entirely sure if I had to uninstall anything, or if installing the new stuff would just work, but in poking around the web I found posts stating that you must uninstall the following items first. Unfortunately I'm going by memory here and have not been able to find my way back to the magic page I found in the myriad of posts that I went through:

    - Microsoft Visual Studio 2010 Beta 2
    - Microsoft .NET Framework 4 Extended (apparently this must be done *before* the next one)
    - Microsoft .NET Framework 4 Client Profile
    - Microsoft Silverlight 4 Tools for Visual Studio 2010
    - WCF RIA Services Preview for Visual Studio 2010

    While I was at it, I removed a bunch of other stuff, like Blend 3, Blend 4, the SDKs associated with them, and a bunch of other stuff that was old. Of course, I didn't really want/need to keep any Silverlight 3 stuff around as I am developing Task-It in Silverlight 4. If I need a Silverlight 3 environment at some point I'll set it up in a virtual environment. NOTE: One thing that I did not uninstall is the Microsoft Silverlight 4 Toolkit November 2009. The reason is that they have not released the March version yet, so if you uninstall this, you'll end up having to reinstall it.

    Installing

    OK, now that I had all of that old stuff off my machine, it was time to get the new stuff. For this part I liked Tim Heuer's post, A Guide to What has Changed in Silverlight 4 RC, better than the Silverlight 4 Information Page.

    - Visual Studio 2010 RC - I installed Ultimate, but you may not need that version. No harm, it's free for now anyway. I downloaded the .exe and the 3 .rar files, then ran the .exe. I then extracted the contents of the ISO (using WinZip) to a new directory, and now had Setup.exe, a bunch of .cab files and some other assorted stuff. I simply ran Setup.exe and chose custom install (only because I wanted to uncheck Visual C++... I don't really have a need for that).
    - Silverlight 4 Tools for Visual Studio 2010 - As mentioned in Tim's blog post, this installs the Silverlight developer runtime, SDK, tools, and WCF RIA Services.
    - WCF RIA Services Toolkit March 2010 - I'm not sure if/when I'll need any of this stuff, but no harm in installing it anyway.
    - Expression Blend 4 beta - Only if you plan to use Blend, which I do.
    - Windows Phone Developer Tools - Only if you are interested in playing with Windows Phone 7 development.

    Wrap Up

    Hopefully I got those steps right. If anyone finds anything I've missed, please just add a comment to this post and I'll update it accordingly.

    Read the article

  • Computer Networks UNISA - Chap 8 – Wireless Networking

    - by MarkPearl
    After reading this section you should be able to:

    - Explain how nodes exchange wireless signals
    - Identify potential obstacles to successful transmission and their repercussions, such as interference and reflection
    - Understand WLAN architecture
    - Specify the characteristics of popular WLAN transmission methods, including 802.11 a/b/g/n
    - Install and configure wireless access points and their clients
    - Describe wireless MAN and WAN technologies, including 802.16 and satellite communications

    The Wireless Spectrum

    All wireless signals are carried through the air by electromagnetic waves. The wireless spectrum is a continuum of the electromagnetic waves used for data and voice communication. The wireless spectrum falls between 9 kHz and 300 GHz.

    Characteristics of Wireless Transmission

    Antennas: Each type of wireless service requires an antenna specifically designed for that service. The service's specification determines the antenna's power output, frequency, and radiation pattern. A directional antenna issues wireless signals along a single direction. An omnidirectional antenna issues and receives wireless signals with equal strength and clarity in all directions. The geographical area that an antenna or wireless system can reach is known as its range.

    Signal Propagation: LOS (line of sight) uses the least amount of energy and results in the reception of the clearest possible signal. When there is an obstacle in the way, the signal may pass through the object or be absorbed by the object, or it may be subject to reflection, diffraction or scattering. Reflection – waves encounter an object and bounce off it. Diffraction – a signal splits into secondary waves when it encounters an obstruction. Scattering – the diffusion, or reflection in multiple different directions, of a signal.

    Signal Degradation: Fading occurs as a signal hits various objects. Because of fading, the strength of the signal that reaches the receiver is lower than the transmitted signal strength. The further a signal moves from its source, the weaker it gets (this is called attenuation). Signals are also affected by noise (electromagnetic interference). Interference can distort and weaken a wireless signal in the same way that noise distorts and weakens a wired signal.

    Frequency Ranges: Older wireless devices used the 2.4 GHz band to send and receive signals. This band has 11 unlicensed communication channels. Newer wireless devices can also use the 5 GHz band, which has 24 unlicensed channels.

    Narrowband, Broadband, and Spread Spectrum Signals: Narrowband – a transmitter concentrates the signal energy at a single frequency or in a very small range of frequencies. Broadband – uses a relatively wide band of the wireless spectrum and offers higher throughputs than narrowband technologies. The use of multiple frequencies to transmit a signal is known as spread-spectrum technology; in other words, a signal never stays continuously within one frequency range during its transmission. One specific implementation of spread spectrum is FHSS (frequency hopping spread spectrum). Another type is known as DSSS (direct sequence spread spectrum).

    Fixed vs. Mobile: Each type of wireless communication falls into one of two categories. Fixed – the locations of the transmitter and receiver do not move (this saves energy, because weaker signal strength is possible with directional antennas). Mobile – the location can change.

    WLAN (Wireless LAN) Architecture

    There are two main types of arrangements. Ad hoc – data is sent directly between devices – good for small local networks. Infrastructure mode – a wireless access point is placed centrally, and all devices connect to it.

    802.11 WLANs

    The most popular wireless standards used on contemporary LANs are those developed by IEEE's 802.11 committee. Over the years several distinct standards related to wireless networking have been released. Four of the best-known standards are also referred to as Wi-Fi: 802.11b, 802.11a, 802.11g and 802.11n. These four standards share many characteristics, i.e. all four use half-duplex signalling and follow the same access method.

    Access Method: The 802.11 standards specify the use of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to access a shared medium. Using CSMA/CA, before a station begins to send data on an 802.11 network, it checks for existing wireless transmissions. If the source node detects no transmission activity on the network, it waits a brief period of time and then sends its transmission. If the source does detect activity, it waits a brief period of time before checking again. The destination node receives the transmission and, after verifying its accuracy, issues an acknowledgement (ACK) packet to the source. If the source receives the ACK it assumes the transmission was successful; if it does not receive an ACK it assumes the transmission failed and sends it again. (A toy sketch of this loop follows at the end of this summary.)

    Association: There are two types of scanning. Active – the station transmits a special frame, known as a probe, on all available channels within its frequency range. When an access point finds the probe frame, it issues a probe response. Passive – the wireless station listens on all channels within its frequency range for a special signal, known as a beacon frame, issued from an access point – the beacon frame contains information necessary to connect to the point. Re-association occurs when a mobile user moves out of one access point's range and into the range of another.

    Frames: Read pages 378–381 about frames and specific 802.11 protocols.

    Bluetooth Networks

    Ericsson originally invented the Bluetooth technology in the early 1990s. In 1998 other manufacturers joined Ericsson in the Special Interest Group (SIG), whose aim was to refine and standardize the technology. Bluetooth was designed to be used on small networks composed of personal communications devices. It has become a popular wireless technology for communicating among cellular telephones, phone headsets, etc.

    Wireless WANs and Internet Access

    Refer to pages 396–402 of the textbook for details.
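    The promised toy sketch of the CSMA/CA send loop (illustrative C#, not from the textbook; the channel and ACK checks are random stand-ins purely for demonstration):

        using System;
        using System.Threading;

        class CsmaCaDemo
        {
            static readonly Random Rng = new Random();

            static bool ChannelBusy() { return Rng.NextDouble() < 0.3; } // ~30% busy
            static bool AckReceived() { return Rng.NextDouble() < 0.9; } // ~90% delivered
            static void WaitBriefly() { Thread.Sleep(Rng.Next(1, 10)); }
            static void Transmit(string frame) { Console.WriteLine("sending: " + frame); }

            static bool SendWithCsmaCa(string frame, int maxAttempts)
            {
                for (int attempt = 1; attempt <= maxAttempts; attempt++)
                {
                    while (ChannelBusy())   // 1. listen before talking;
                        WaitBriefly();      //    busy: wait briefly, check again
                    WaitBriefly();          // 2. idle: brief pause, then transmit
                    Transmit(frame);
                    if (AckReceived())      // 3. destination verified the frame
                        return true;        //    and answered with an ACK
                    // no ACK: assume the transmission failed and retransmit
                }
                return false;
            }

            static void Main()
            {
                Console.WriteLine(SendWithCsmaCa("hello 802.11", 5) ? "ok" : "gave up");
            }
        }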

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I’ve been playing around with the new Async CTP preview available for download from Microsoft. It’s amazing how language trends are influencing the evolution of Microsoft’s development platform. Much more effort is being put in at the language level today than in previous versions of .NET. In this post series I’ll review some major features contained in this release:

    - Asynchronous functions
    - TPL Dataflow
    - Task-based asynchronous pattern

    Part I: Asynchronous Functions

    This is a means of expressing asynchronous operations. This kind of function must return void or Task/Task<> (functions returning void let us implement fire & forget asynchronous operations). The two new keywords introduced are async and await.

    - async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation.
    - await: allows us to define operations inside a function that will be awaited for continuation (more on this later).

    Async/await sample:

        async void ShowDateTimeAsync()
        {
            while (true)
            {
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                await TaskEx.Delay(1000);
            }
        }

    The previous sample is a typical usage scenario for these new features. Suppose we query some external web service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user's UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the keyword async (meaning it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how this actually works: the flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn't use any of the regular existing async patterns and we've defined the method very much like a synchronous one.

    Now, compare the following traditional event-based snippet in contrast to the previous async/await sample:

        void ComplicatedShowDateTime()
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDateTimeCompleted += (s, e) =>
            {
                Console.WriteLine("Current DateTime is: {0}", e.Result);
                client.GetDateTimeAsync();
            };
            client.GetDateTimeAsync();
        }

    The previous implementation is somewhat similar to the first shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application; this is just an example).

    How does it work? Using a state workflow (or jump table, actually), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm. It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.
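    Both samples in this post return void; as a hedged aside (using the same illustrative service names as above), the Task<>-returning flavor looks like this, so that callers can await the result themselves instead of firing and forgetting:

        // An async function with a result: callers await it rather than fire-and-forget.
        async Task<string> GetDateTimeMessageAsync()
        {
            var client = new ServiceReference1.Service1Client();
            var dt = await client.GetDateTimeTaskAsync();   // execution resumes here on completion
            return string.Format("Current DateTime is: {0}", dt);
        }

        // Consuming it from another async function:
        async void ShowMessageAsync()
        {
            Console.WriteLine(await GetDateTimeMessageAsync());
        }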

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Moreover, other news was not highlighted in the keynote but is equally important, if not more so, for the BI community. Let's start with the big news in the keynote (other details on the SQL Server Blog):

    - Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance and how it differs from using SSD technology.
    - Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered Columnstore index. This is really great news for near-real-time reporting needs!
    - Polybase: in 2013, SQL Server 2012 Parallel Data Warehouse (PDW) will debut, and it will include the Polybase technology. By using Polybase, a single T-SQL query will run across relational data and Hadoop data. A single query language for both. Sounds really interesting for using Big Data in a more integrated way with existing relational databases. And, of course, for loading a data warehouse using Big Data, which is the ultimate goal that we BI pros all have, right?
    - SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.
    - Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool generating DAX queries on top of a Multidimensional model. There is still no info about availability, but this is *not* included in SQL Server 2012 SP1.

    So what about Mobile BI? Well, even if not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area:

    - iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working by June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (or whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don't have to wait another six months before seeing some demo of native BI applications on mobile platforms!
    - Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012.

    If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday, November 8, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • OpenGL / C++ and some strange light problem on half the board

    - by mlodziaszka
    I have a problem with lights in my OpenGL "game". I have a board that is a square: (-50,50), (50,50), (50,-50), (-50,-50) in x and z, since y doesn't matter at all. I tried to make something like a flashlight that moves and rotates with the camera (me), but when I try to rotate more than 90 degrees to the left or right it just gives different light: http://imageshack.us/photo/my-images/688/lightij.jpg/ (left is the spotlight, right the point light). There is also a point light in the middle, but it's working strangely (not like a point light): it shines only on half of the board, from (-50,50), (50,50), (50,0), (-50,0) in x and z.

    Link to my repo, where you can find the game exe in Downloads and the full code in Source: https://bitbucket.org/mlodziaszka/my_game

    All the relevant fragments of the lighting code:

        float gl_amb[] = { 0.2f, 0.2f, 0.2f, 1.0f };
        glLightModelfv(GL_LIGHT_MODEL_AMBIENT, gl_amb);
        glEnable(GL_LIGHTING);   // Wlaczenie oswietlenia (enable lighting)
        glShadeModel(GL_SMOOTH); // Wybor techniki cieniowania (choose shading model)
        glEnable(GL_LIGHT0);     // Wlaczenie 0-go zrodla swiatla (enable light 0)
        glEnable(GL_LIGHT1);

    Cube parameters:

        float m1_amb[] = { 1.0f, 0.0f, 0.0f, 1.0f };
        float m1_dif[] = { 1.0f, 0.0f, 0.0f, 1.0f };
        float m1_spe[] = { 1.0f, 0.0f, 0.0f, 1.0f };
        glMaterialfv(GL_FRONT, GL_AMBIENT, m1_amb);
        glMaterialfv(GL_FRONT, GL_DIFFUSE, m1_dif);
        glMaterialfv(GL_FRONT, GL_SPECULAR, m1_spe);
        glMaterialf(GL_FRONT, GL_SHININESS, 50.0f);

    Texture parameters:

        float m1_amb[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        float m1_dif[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        float m1_spe[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        glMaterialfv(GL_FRONT, GL_AMBIENT, m1_amb);
        glMaterialfv(GL_FRONT, GL_DIFFUSE, m1_dif);
        glMaterialfv(GL_FRONT, GL_SPECULAR, m1_spe);
        glMaterialf(GL_FRONT, GL_SHININESS, 0.0f);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    Light0:

        // with some magic, isn't working anyway
        float l0_amb[] = { 0.2f, 0.2f, 0.2f, 1.0f };
        float l0_dif[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        float l0_spe[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        float l0_pos[] = { g_Camera.m_vPosition.x, g_Camera.m_vPosition.y, g_Camera.m_vPosition.z, 1.0f };
        float temp = 0.0f, temp2 = 0.0f, temp3 = 0.0f;
        if (g_Camera.m_vView.z < g_Camera.m_vPosition.z) {
            temp = g_Camera.m_vView.x - g_Camera.m_vPosition.x;
            temp2 = g_Camera.m_vView.z - g_Camera.m_vPosition.z;
        } else {
            temp = g_Camera.m_vView.x - g_Camera.m_vPosition.x;
            temp2 = g_Camera.m_vView.z - g_Camera.m_vPosition.z;
        }
        float l0_pos1[] = { temp, 0.0f, temp2 };
        //float l0_pos1[] = { -1.0f, 0.0f, -1.0f };
        glLightfv(GL_LIGHT0, GL_AMBIENT, l0_amb);
        glLightfv(GL_LIGHT0, GL_DIFFUSE, l0_dif);
        glLightfv(GL_LIGHT0, GL_SPECULAR, l0_spe);
        glLightfv(GL_LIGHT0, GL_POSITION, l0_pos);
        glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 15.0f);
        glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, l0_pos1);

    Light1:

        float l1_amb[] = { 0.2f, 0.2f, 0.2f, 1.0f };
        float l1_dif[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        float l1_spe[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        float l1_pos[] = { 0.0f, 0.0f, 0.0f, 1.0f };
        glLightfv(GL_LIGHT1, GL_AMBIENT, l1_amb);
        glLightfv(GL_LIGHT1, GL_DIFFUSE, l1_dif);
        glLightfv(GL_LIGHT1, GL_SPECULAR, l1_spe);
        glLightfv(GL_LIGHT1, GL_POSITION, l1_pos);

    I know that the way I'm doing this is very old, but for now I want to keep it like that. I would be really grateful if someone could tell me what is wrong with my lights xD. Full code: link above ^^

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first [closed]

    - by mattalexx
    Background

    I have a client site that consists of a CakePHP installation and a Magento installation:

        /web/example.com/
        /web/example.com/app/                   <== CakePHP
        /web/example.com/app/webroot/           <== DocumentRoot
        /web/example.com/app/webroot/store/     <== Magento
        /web/example.com/config/                <== Site-wide config
        /web/example.com/vendors/               <== Site-wide libraries

    The server runs Apache 2.2.3.

    The problem

    The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.

    Abandoned solution

    At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons: Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using abstract subdomains that use the main virtual host because it has ServerAlias *.example.com. So suddenly having them only use static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment to the one they were built in as I can. Also, I do not want to debug their code to make it portable.

    Half-baked solution

    After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:

        /web/example.com/                            <== A bunch of their files are in here
        /web/example.com/app/webroot/                <== 1st DocumentRoot; a bunch of their files are in here
        /web/example.com/app/webroot/store/          <== Some more are in here
        /web/example.com/site/                       <== New dir; contains only site files
        /web/example.com/site/app/                   <== CakePHP
        /web/example.com/site/app/webroot/           <== 2nd DocumentRoot
        /web/example.com/site/app/webroot/store/     <== Magento
        /web/example.com/site/config/                <== Site-wide config
        /web/example.com/site/vendors/               <== Site-wide libraries

    After I made this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found (no miscellaneous uploaded company projects are found), try finding something within /web/example.com/site/app/webroot/.

    Another thing to keep in mind: the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory.

    Conclusion

    Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
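    One hedged sketch of how that fallback might be wired up with mod_rewrite (illustrative and untested against this exact layout; it inverts the lookup order by making the clean tree the DocumentRoot, which also gives site files the DOCUMENT_ROOT value preferred above):

        # In the vhost for www.example.com (Apache 2.2.3, mod_rewrite enabled)
        DocumentRoot /web/example.com/site/app/webroot

        RewriteEngine On
        # If the request maps to a real file in the old, cluttered webroot,
        # serve it from there (a filesystem-path substitution is legal in
        # per-server context)...
        RewriteCond /web/example.com/app/webroot%{REQUEST_URI} -f
        RewriteRule ^/?(.*)$ /web/example.com/app/webroot/$1 [L]
        # ...otherwise the request falls through to the normal DocumentRoot
        # (the clean /site tree).

    The trade-off: the company's legacy scripts would then see the new DOCUMENT_ROOT value, so they would need a quick test pass – the mirror image of the concern raised above.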

    Read the article

  • The World of SQL Database Deployment

    - by GGBlogger
    In my early development days, I used Microsoft Access for building databases. It made things easy, since I only needed to package the database with the installation package so my clients would have access to it. When we began the development of a new package in Visual Studio .NET, I decided to use SQL Server Express. It was free and provided good tools – also free. I thought it was a tremendous idea until it came time to distribute our new software! What a surprise.

    The nightmare

    Ah, the choices! Detach the database and have the client reattach it to a newly installed – oh wait. FIRST my new client needs to download and install SQL Server Express with SQL Server Management Studio. That's not a great thing, but it is one more nightmare step for users who may have other versions of SQL installed. Then the question became – do we detach and reattach, or do we do a backup? It was too late (bad planning) to revert to Microsoft Access, but we badly needed a simple way to package and distribute both the database AND sample contents.

    Red Gate to the rescue

    It took me a while to find an answer, but I did find it in a package called SQL Packager, sold by a relatively unpublicized company in England called Red Gate. They call their products "ingeniously simple" and I must agree with that description. With SQL Packager you point to the database (more in a minute) you want to distribute. A few mouse clicks and dialogs and you have an executable file that you can ship virtually anywhere and virtually any way which, when run, installs the database on your destination SQL Server instance! It really is that simple.

    Easier to show than tell

    Let's explore a hypothetical case. Let's say you have a local SQL database of customers and you have decided you want to share it with your subsidiaries or partners. Here is the underlying screen you will see on starting SQL Packager. There are a bunch of possibilities here, but I'm going to keep this relatively simple. At this point I simply want to illustrate the simplicity of generating an executable to deliver your database. You will notice that you can set up a new package, edit an existing package or change a bunch of options.

    Start SQL Packager

    And the following is the default dialog you get on startup. In the next dialog, I've selected the Server and Database. I've also selected Windows Authentication. Pressing Next causes SQL Packager to run a number of checks and produce a report. Now you're given a comprehensive list of what is going to be packaged, and you're allowed to change it if you desire. I've never made any changes here, so I can't really make any suggestions. That just illustrates the comprehensive nature of so many Red Gate products, including this one. Clicking Next gives you still further options. SQL Packager then works its magic and shows you a dialog with the results. Packager then gives you a dialog of the scripts it has generated; the capture above only shows 1 of 4 tabs. Finally, pressing Next gives you the option to generate a .NET executable or a C# project. I've only generated an executable, so I'm not in a position to tell you what the C# project looks like. That may be the subject of further discussions. You can rename the package and tell SQL Packager where to save it.

    I've skipped a lot, but this will serve to illustrate the comprehensive (and ingenious) things Red Gate does. All in all, it's a superb way to distribute populated SQL databases. Oh – we'll save running the resulting executable for later also, but believe me, it's insanely simple.

    Read the article

  • ReSharper 8.0 EAP now available

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/28/resharper-8.0-eap-now-available.aspx

    JetBrains have just released ReSharper 8.0 Beta on their Early Access Programme at http://www.jetbrains.com/resharper/whatsnew/?utm_source=resharper8b&utm_medium=newsletter&utm_campaign=resharper&utm_content=customers

    ReSharper 8.0 comes with the following new features:

    - Support for Visual Studio 2013 Preview. Yes, ReSharper is known to work well with the fresh preview of Visual Studio 2013, and if you have already started digging into it, ReSharper 8.0 Beta is ready for the challenge.
    - Faster code fixes. Thanks to the new Fix in Scope feature, you can choose to batch-fix some of the code issues that ReSharper detects in the scope of a project or the whole solution. Supported fixes include removing unused directives and redundant casts.
    - Project dependency viewer. ReSharper is now able to visualize a project dependency graph for a bird's eye view of dependencies within your solution, all without compiling anything!
    - Multifile templates. ReSharper's file templates can now be expanded to generate more than one file. For instance, this is handy for generating pairs of a main logic class and a class for extensions, or sets of partial files.
    - Navigation improvements. These include a new action called Go to Everything to let you search for a file, type or method name from the same input box; support for line numbers in navigation actions; a new tool window called Assembly Explorer for browsing through assemblies; and two more contextual navigation actions: Navigate to Generic Substitutions and Navigate to Assembly Explorer.
    - New solution-wide refactorings. The set of fresh refactorings is headlined by the highly requested Move Instance Method, to move methods between classes without making them static. In addition, there are Inline Parameter and Pull Parameter. Last but not least, we're also introducing 4 new XAML-specific refactorings!
    - Extraordinary XAML support. A plethora of new and improved functionality for all developers working with XAML code includes dedicated grid inspections and quick-fixes; Extract Style, Extract, Move and Inline Resource refactorings; atomic renaming of dependency properties; and a lot more.
    - More accessible code completion. ReSharper 8 makes more of its IntelliSense magic available in automatic completion lists, including extension methods and an option to import a type. We're also introducing double completion, which gives you additional completion items when you press the corresponding shortcut for the second time.
    - A new level of extensibility. With the new NuGet-based Extension Manager, discovering, installing and uninstalling ReSharper extensions becomes extremely easy in Visual Studio 2010 and higher. When we say extensions, we mean not only full-fledged plug-ins but also sets of templates or SSR patterns that can now be shared much more easily.
    - CSS support improvements. Smarter usage search for CSS attributes, new CSS-specific code inspections, configurable support for CSS3 and earlier versions, compatibility checks against popular browsers – there's a rough outline of what's new for CSS in ReSharper 8.
    - A command-line version of ReSharper. ReSharper 8 goes beyond Visual Studio: we now provide a free standalone tool with hundreds of ReSharper inspections and, additionally, a duplicate code finder that you can integrate with your CI server or version control system.
    - Multiple minor improvements in areas such as decompiling and code formatting, as well as support for the Blue Theme introduced in Visual Studio 2012 Update 2.

    Read the article

  • The Social Content Conundrum

    - by Mike Stiles
    Here’s the social content conundrum: people who are not entertainers are being asked to entertain. Despite a world of skilled MBAs, marketing savants, technological innovators, analysts, social strategists and consultants, every development in social for brands keeps boomeranging right back to the same unavoidable truth. Success hinges on having content creators who know how to entertain the target audience. You can’t make this all about business processes. You can’t make this all about technology, though data is critical and helps inform content. This is about having human beings who know the audience, know what they’d love to see, and can create the magic that will draw and hold them. Since showing up in the News Feed is critical for exposure and engagement, and since social ads primarily serve to amplify content that’s performing well, I’m comfortable saying content creators are becoming exponentially recruited and valued. They will no longer be commodities. They’ll be your stars.

    Social has fundamentally changed the relationship between brand and consumer. No longer can the customer be told to sit down, shut up, and listen to our ads. It’s now all about what consumers are willing to watch or read. Their patience for subjecting themselves to material they aren’t interested in is waning. Therefore, brands must now be producers of entertainment and information content, not merely placers of ads within someone else’s content. Social has given you a huge stage, with an audience sitting out there waiting to see what you’re going to do. What are you putting on that stage?

    For most corporate environments, entertaining is alien. It’s risky and subjective. Most operate around two foundational principles: control and fear. To entertain and inform with branded content, some control has to go. You control the product. Past that, control is being transferred into the hands of the consumer. The “fear first” culture also has to yield. If you strive to never make waves, you will move absolutely nothing. Because most corporations don’t house entertainers, they must be found, then trusted. They’re usually a little weird. The ideas they’ll bring may seem “out there.” But like any business professional, they’ve gone through the training and experiences that make them uniquely good at what they do, even if you don’t quite understand them. It’s okay. It’s what the audience thinks that matters. Get it right, and you’ll be generating one ambassador after another who’s proud to be identified with the brand and will regularly consume and share your content.

    Entertainment entities are able to shape our culture and succeed beyond their wildest dreams by being beholden to one thing…what the public likes and wants. When brands put the same emphasis on crowd-pleasing content, they too will enjoy brand fame the likes of which they’ve never seen. The stage is yours. Now get out there and go for that applause.

    Read the article

  • JTF Translation Festival 2011

    - by user13133135
    It's been a while since the last post... I have been working on machine translation (MT) and post editing (PE) for Japanese. Last year was my first step in the MT+PE area, and I would take this year as an advanced step. I plan to talk about post editing 2011 (the advanced step) on November 29 at the JTF Translation Festival.

    [5 days before the application deadline!]
    21st JTF Translation Festival
    - Date: Nov 29, 2011, Tuesday, 9:30–20:30 (gate opens 9:00)
    - Place: Arcadia Ichigaya, Tokyo
    - http://www.jtf.jp/jp/festival/festival_top.html

    In this session, I would like to expand on the question of "how to best utilize MT and PE", both from the view of the client and of the translator. I will show some examples of post editing as a guideline to the best and most effective way to post-edit Japanese. Also, I will discuss best practices for MT users (clients). The session lasts 90 minutes... sounds a little long for me, but I want to spend more time on discussion than last year. It would be great to exchange thoughts and experiences about MT and PE. What are your concerns or problems in your daily work with MT? If you have some, please bring them to my session at the JTF Translation Festival. Here are my session details (Japanese): http://www.jtf.jp/jp/festival/festival_program.html#koen_04

    Here is the outline of my session: What is the advantage of MT? Does it solve all the problems of cost, resources, and quality? Well, it is not magic, so you cannot expect everything at once. When you have a problem, there are 3 options: 1. Be patient and wait until everything is ready; 2. Run a workaround using anything available now; 3. Find something completely new and spend time and money. This time, I will focus on option 2 – doing something with what we already have. That is, I will discuss how we can best utilize MT in our daily business, from two points of view: the client's and the translator's.

    Looking forward to meeting many people and exchanging thoughts and information!

    Read the article

  • Need help understanding WPF MeasureOverride and ArrangeOverride infinity and 0 sizes

    - by Scott Bilas
    I'm trying to build a simple panel that contains one child and will snap it to the nearest grid sizes. I'm having a hard time figuring out how to do this by overriding MeasureOverride and ArrangeOverride. Here's what I'm after: I want my panel to tell its owner that (a) it wants to be as large as possible, then (b) when it finds out what that size is, it will size itself and its child UIElement according to the nearest smaller snap point. So if we're snapping to 10's, and I can be in a region no bigger than 192x184, the panel will tell its parent container "my actual size is going to be 190x180". That way anything bordering my control will be able to align to its edges, as opposed to the potential space.

    When I put my panel inside of a Grid, I get either 0 or PositiveInfinity (I forget) for the incoming size in the overrides, but what I need to know is "how big can my space actually get?", not infinities. Part of the problem, I think, is what WPF considers magic values of PositiveInfinity and 0 for size. I need a way to say, via MeasureOverride, "I can be as big as you will allow me" and in ArrangeOverride to actually size to the snapped size.

    Or am I going about this the completely wrong way? Measuring and arranging look very complicated, just from wandering around a little in the code for the standard panels in Reflector.
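    A hedged sketch of one way the snapping panel could be written (assuming a snap interval of 10 and the child filling the snapped rectangle; this is not the asker's actual code):

        using System;
        using System.Windows;
        using System.Windows.Controls;

        public class SnapPanel : Panel
        {
            const double Snap = 10.0;

            static double SnapDown(double v)
            {
                return double.IsInfinity(v) ? v : Math.Floor(v / Snap) * Snap;
            }

            protected override Size MeasureOverride(Size availableSize)
            {
                // Infinity here means "unconstrained" (e.g. inside a ScrollViewer),
                // not "zero" -- there is nothing to snap against yet, so just
                // measure the child and report its desired size upward.
                Size desired = new Size();
                foreach (UIElement child in InternalChildren)
                {
                    child.Measure(availableSize);
                    desired = child.DesiredSize;
                }
                return desired; // must never return an infinite size
            }

            protected override Size ArrangeOverride(Size finalSize)
            {
                // finalSize is always finite: the space the parent actually
                // granted, so snapping belongs here (192x184 becomes 190x180).
                Size snapped = new Size(SnapDown(finalSize.Width), SnapDown(finalSize.Height));
                foreach (UIElement child in InternalChildren)
                    child.Arrange(new Rect(snapped));
                return snapped; // reported as our actual render size
            }
        }

    The essence: an infinite availableSize in MeasureOverride means "take whatever you want", so the concrete answer to "how big can my space actually get?" only arrives as ArrangeOverride's finalSize, which is where the snap-down has to happen.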

    Read the article

  • Problem with jqGrid in Internet Explorer 8

    - by Dave Swersky
    I have developed an ASP.NET MVC (version 2 RC) application with a ton of jqGrids. It's working like a champ in Firefox, but I've discovered a problem in IE8. The "Main View" grids can be filtered by a search box or one of a few dropdowns above the grid. I use some JavaScript to reset the url for the grid, then trigger a reload, thusly:

        function filterByName(filter) {
            if (filter == 'All') {
                $('#list').setGridParam({ url: 'Application/GetApplications' });
                $('#list').trigger("reloadGrid");
            }
            else {
                $('#list').setGridParam({ url: 'Application/GetAppByName/' + filter + '/' });
                $('#list').trigger("reloadGrid");
            }
        }

    This works like magic in Firefox, but I'm getting an HTTP 400 Bad Request when I do this in IE8. The IE8 client-side debugger is like flint and tinder compared to Firebug's flamethrower, so I'm not having much luck figuring out why it breaks in IE8. Has anyone seen this? Also, the jqGrid "trigger" method here is swallowing the AJAX exception. Is there a way to get it to bubble up so I can get to the exception details?

    UPDATE: The problem was with the syntax in my "onchange" event for the dropdowns. I was using:

        onchange = "filterByMnemonic($('#drpMnemonic')[0].value);"

    which Firefox apparently doesn't mind but IE sees as nuthin'. This, however, works in both browsers:

        onchange = "filterByMnemonic($('#drpMnemonic > option:selected').attr('value'));"

    Read the article

  • Correctly calling setGridWidth on a jqGrid inside a jQueryUI Dialog

    - by Dan
    I have a jQueryUI dialog (#locDialog) which has a jqGrid ($grid) inside it. When the dialog opens (initially, but this gets called whenever it opens), I want the $grid to resize to the size of the $locDialog. When I do this initially, I get scrollbars inside the grid (not inside the dialog). If I debug the code, I see the width of the $grid is 677. So I call setGridWidth() again and check the width, and now I have 659, which is 18px less – the size of the scroll area for the jqGrid (dun-dun-dun...). When I resize the dialog, I resize the grid as well, and everything is happy – no scrollbars, except where necessary.

    My dialog init code:

        $locDialog = $('#location-dialog').dialog({
            autoOpen: false,
            modal: true,
            position: ['center', 100],
            width: 700,
            height: 500,
            resizable: true,
            buttons: {
                "Show Selected": function() { alert($('#grid').jqGrid('getGridParam', 'selarrrow')); },
                "OK": function() { $(this).dialog('close'); },
                "Cancel": function() { $(this).dialog('close'); }
            },
            open: function(event, ui) {
                $grid.setGridHeight($(this).height() - 54); // No idea why 54 is the magic number here
                $grid.setGridWidth($(this).width(), true);
            },
            close: function(event, ui) {
            },
            resizeStop: function(event, ui) {
                $grid.setGridWidth($locDialog.width(), true);
                $grid.setGridHeight($locDialog.height() - 54);
            }
        });

    I am curious whether anyone has seen this before. Really, it isn't the end of the world if I initially have unnecessary scrollbars, but it is just odd that when I call setGridWidth initially, it doesn't take into account the scroll area of 18px. As for the magic number 54, that is the number I had to subtract from the height of the dialog to get the grid to render without unnecessary scrollbars.

    Read the article

  • Getting a SelectList to an MVC view using AJAX/jQuery

    - by Chris
    Hi all. I have a C# MVC application which populates a dropdown based on the selected date. Once the date is selected I send it to an action via AJAX/jQuery. The action gets a list of items to return for that date. Here is where my problem is: I have done it previously by rendering a partial view from the action and passing it the SelectList as the model. However, I really just want to do it inline in the original view, so I'm hoping there is some way I can return the SelectList and from there do some magic JavaScript/jQuery to put it into a dropdown. Has anybody ever done this before? If so, what do I do on the client end after calling load() to return the SelectList? I've done something like this previously, when I was just returning a string or other value to be rendered as straight text:

        $("#returnTripRow").load("/Trip.aspx/GetTripsForGivenDate?date=" + escape(selection));

    But I'm not sure how to intercept the data and morph it into an Html.DropDown() call, or equivalent. Any ideas? Thanks, Chris
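    One hedged way to finish this (all names below are illustrative, not from the actual app) is to skip the SelectList entirely: return JSON from the action and build the option elements in a jQuery callback instead of using load():

        using System;
        using System.Linq;
        using System.Web.Mvc;

        public class TripController : Controller
        {
            public JsonResult GetTripsForGivenDate(DateTime date)
            {
                // Stand-in data source; a real app would filter its trips by 'date' here.
                var trips = new[]
                {
                    new { Value = 1, Text = "Morning trip" },
                    new { Value = 2, Text = "Evening trip" }
                }.ToList();

                // MVC 2 requires explicitly allowing JSON over GET.
                return Json(trips, JsonRequestBehavior.AllowGet);
            }
        }

        // Client side (jQuery), replacing the load() call -- sketch:
        //
        //   $.getJSON("/Trip.aspx/GetTripsForGivenDate", { date: selection }, function (data) {
        //       var dd = $("#returnTripDropDown").empty();
        //       $.each(data, function (i, item) {
        //           dd.append($("<option></option>").val(item.Value).text(item.Text));
        //       });
        //   });

    The design choice here is that the view owns the dropdown markup and the action only ships data, which avoids rendering a partial view for what is really a list of value/text pairs.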

    Read the article

  • OAuth with Google Reader API using Objective C

    - by Dylan
    I'm using the gdata OAuth controllers to get an OAuth token and then signing my requests as instructed:

        [auth authorizeRequest:myNSURLMutableRequest]

    It works great for GET requests, but POSTs are failing with 401 errors. I knew I wouldn't be able to remain blissfully ignorant of the OAuth magic. The Google Reader API requires parameters in the POST body. OAuth requires those parameters to be encoded into the signature as if they were on the query string. It doesn't appear the gdata library does this. I tried hacking it in, the same way it handles the query string, but no luck. This is so difficult to debug, as all I get is a 401 from the Google black box and I'm left to guess. I really want to use OAuth so I don't have to collect login credentials from my users, but I'm about to scrap it and go with the simpler cookie-based authentication, which is more mature. It's possible I'm completely wrong about the reason it's failing – this is my best guess. Any suggestions for getting gdata to work, or maybe an alternative iPhone-friendly OAuth library?
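    For what it's worth, the signing rule described here is part of OAuth 1.0 itself: the parameters of a form-encoded POST body must be folded into the signature base string exactly like query parameters. A hedged sketch of that construction (written in C# purely to illustrate the algorithm; this is not the gdata library's code):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        static class OAuthBaseStringSketch
        {
            // RFC 3986 percent-encoding, which OAuth mandates everywhere below.
            static string Enc(string s)
            {
                const string unreserved =
                    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~";
                var sb = new StringBuilder();
                foreach (byte b in Encoding.UTF8.GetBytes(s))
                    sb.Append(unreserved.IndexOf((char)b) >= 0
                        ? ((char)b).ToString()
                        : "%" + b.ToString("X2"));
                return sb.ToString();
            }

            // The crucial part: oauth_* parameters AND form-encoded POST body
            // parameters are merged, encoded, sorted, and joined -- body values
            // are treated exactly as if they were on the query string.
            public static string BaseString(string method, string url,
                IEnumerable<KeyValuePair<string, string>> oauthAndBodyParams)
            {
                string normalized = string.Join("&",
                    oauthAndBodyParams
                        .Select(p => Enc(p.Key) + "=" + Enc(p.Value))
                        .OrderBy(s => s, StringComparer.Ordinal)
                        .ToArray());
                return method.ToUpperInvariant() + "&" + Enc(url) + "&" + Enc(normalized);
            }

            // HMAC-SHA1 over the base string, keyed by the two secrets.
            public static string Sign(string baseString, string consumerSecret, string tokenSecret)
            {
                byte[] key = Encoding.ASCII.GetBytes(Enc(consumerSecret) + "&" + Enc(tokenSecret));
                using (var hmac = new HMACSHA1(key))
                    return Convert.ToBase64String(
                        hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString)));
            }
        }

    If a signing library only collects query-string parameters when building this base string, a POST carrying body parameters would produce exactly the symptom described: a signature the server rejects with a 401.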

    Read the article

  • Using the Rijndael Object in VB.NET

    - by broke
    I'm trying out Rijndael to generate an encrypted license string to use for our new software, so we know that our customers are using the same number of app copies that they paid for. I'm doing two things:

    1. Getting the user's computer name.
    2. Adding a random number between 100 and 1000000000.

    I then combine the two and use that as the license number (this will probably change in the final version, but I'm just doing something simple for demonstration purposes). Here is some sample codez:

        Private Sub Main_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            Dim generator As New Random
            Dim randomValue As Integer
            randomValue = generator.Next(100, 1000000000)

            ' Create a new Rijndael object to generate a key
            ' and initialization vector (IV).
            Dim RijndaelAlg As Rijndael = Rijndael.Create

            ' Create a string to encrypt.
            Dim sData As String = My.Computer.Name.ToString + randomValue.ToString
            Dim FileName As String = "C:\key.txt"

            ' Encrypt text to a file using the file name, key, and IV.
            EncryptTextToFile(sData, FileName, RijndaelAlg.Key, RijndaelAlg.IV)

            ' Decrypt the text from a file using the file name, key, and IV.
            Dim Final As String = DecryptTextFromFile(FileName, RijndaelAlg.Key, RijndaelAlg.IV)
            txtDecrypted.Text = Final
        End Sub

    That's my load event, but here is where the magic happens:

        Sub EncryptTextToFile(ByVal Data As String, ByVal FileName As String, ByVal Key() As Byte, ByVal IV() As Byte)
            Dim fStream As FileStream = File.Open(FileName, FileMode.OpenOrCreate)
            Dim RijndaelAlg As Rijndael = Rijndael.Create
            Dim cStream As New CryptoStream(fStream, _
                RijndaelAlg.CreateEncryptor(Key, IV), _
                CryptoStreamMode.Write)
            Dim sWriter As New StreamWriter(cStream)
            sWriter.WriteLine(Data)
            sWriter.Close()
            cStream.Close()
            fStream.Close()
        End Sub

    There are a couple of things I don't understand. What if someone reads the text file and recognizes that it is Rijndael, and writes a VB or C# app that decrypts it? I don't really understand all of this code, so if you guys can help me out I will love you all forever. Thanks in advance.
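    On that closing worry: the ciphertext alone is useless without the key and IV, and in the code above they are generated fresh on every run and never stored, so even the legitimate app could not re-read a file from a previous session. One hedged option (sketched in C#, since the post mentions both languages; names are illustrative) is to derive the key and IV from an application secret with PBKDF2 instead of persisting them:

        using System;
        using System.Security.Cryptography;

        // Illustrative only: derive a stable Rijndael key and IV from an
        // application secret plus a per-customer salt, instead of persisting
        // the randomly generated RijndaelAlg.Key/IV anywhere.
        static class LicenseCrypto
        {
            public static Rijndael CreateKeyedRijndael(string appSecret, byte[] salt)
            {
                // PBKDF2 stretches the secret; 32-byte key (256-bit), 16-byte IV.
                var kdf = new Rfc2898DeriveBytes(appSecret, salt, 10000);
                Rijndael alg = Rijndael.Create();
                alg.Key = kdf.GetBytes(32);
                alg.IV = kdf.GetBytes(16);
                return alg;
            }
        }

    Even then, the scheme is only a speed bump: a determined user can disassemble the binary and recover the secret. License checks raise the bar; they don't make decryption impossible.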

    Read the article

  • How can we leverage NSAppearance?

    - by Brad Allred
    I was reading the Cocoa documentation and stumbled across some new features in the 10.9 API. From the docs I gather that the NSAppearance class and a related protocol, NSAppearanceCustomization, appear to be a means of customizing the appearance of NSView and its descendants. "An NSAppearance object represents a file that specifies a standard or custom appearance that applies to a subset of UI elements in an app. An app can contain multiple appearance files and—because NSAppearance conforms to NSCoding—you can use Interface Builder to assign UI elements to an appearance. Typically, you customize a window by using Xcode to create an appearance file that contains the views you want to customize and the custom art that should be applied to them. Xcode transforms the file's art content into a runtime format that AppKit can draw when the specified views are displayed." Well, that all sounds neat and promising, but nowhere in the documentation can I find what an appearance file is or how to make one. Google searches are coming up empty, other than for the thin documentation I have already read. I do see that UIKit has a similar-sounding UIAppearance class, but from what I can tell this is not a straight port of the UIKit class. Does anybody know how to make this magic "appearance file" and what exactly we can do with it?

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    Hi,

    My company has a server with one big partition containing a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually:

        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success:

    - e2retrieve
    - testdisk
    - photorec
    - dd_rescue/dd_rhelp
    - ddrescue
    - fsck.ext2
    - e2salvage

        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan 1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode

    Are there any other tools I should test before considering this disk definitely unrecoverable?

    Many thanks,
    ip

    Read the article

  • VS2010 Text Editor: How do I set the default formatting style for inline code blocks?

    - by nukefusion
    I've got a problem with the formatting of inline code blocks within the VS2010 text editor and wonder if anyone else has had similar problems and found the 'magic' setting I'm looking for. I'm working my way through tutorials in an MVC book. Whenever I add some inline code blocks to a view I want them formatted like so:

        <% foreach (var link in Model) { %>
            <a href="<%=Url.RouteUrl(link.RouteValues)%>">
                <%=link.Text%>
            </a>
        <% } %>

    What I'm actually getting is this (auto-formatted by the IDE when I finish writing the code):

        <% foreach (var link in Model) { %>
        <a href="<%=Url.RouteUrl(link.RouteValues)%>">
            <%=link.Text%>
        </a>
        <% } %>

    It's pretty irritating. Any ideas on how I can instruct the IDE to leave my <% %> tags alone? I've been fiddling with options under "Tools - Options - Text Editor" for ages but alas am getting nowhere...

    Read the article

  • MATLAB Builder NE crashes app pool on IIS 7.5

    - by Alkersan
    I'm developing a web user interface for MATLAB functions with ASP.NET. I've started by studying the demos and got stuck on the following problem. I created a MyComponent.dll assembly with deploytool from MATLAB 2010a (target framework 3.5). The component has one function, getKnot(), which returns a figure:

    function df = getKnot()
        f = figure('Visible', 'off');
        knot;
        df = webfigure(f);
        close(f);
    end

    Then I made a simple web app in Visual Studio 2008 SP1 with only one page, Default.aspx. I added references to MWArray.dll, WebFiguresService.dll and MyComponent.dll. The code-behind is:

    using System;
    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using MyComponent;
    using MathWorks.MATLAB.NET.WebFigures;

    namespace MATLAB_WebApplication
    {
        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                var myComponentClass = new MyComponentClass();
                var x = myComponentClass.getKnot();
                //WebFigureControl1.WebFigure = new WebFigure();
            }
        }
    }

    When I run this page on Visual Studio's development web server, everything is fine and the figure works. But when I try to deploy the webfigure on my local IIS 7.5, which runs on Win7 x32, the IIS app pool crashes. There is an entry in the system event log: "A process serving application pool 'Classic .NET AppPool' suffered a fatal communication error with the Windows Process Activation Service. The process id was '3676'. The data field contains the error number 6D000780". This happens when MyComponent is instantiated. What could I have forgotten when moving to IIS? Other examples, like the magic square console application, run perfectly, and every MATLAB component instantiates fine, just not in the IIS environment.
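
    Not a confirmed fix, but a commonly suggested first check for MATLAB deployments under IIS is whether the worker process gets a loaded user profile, since the MCR wants a writable per-user location to unpack its CTF archive into; it is also worth confirming that the matching MCR version is installed on the server. A sketch using appcmd (the app pool name is taken from the event log entry above):

    rem an assumption to test, not a verified fix: let the worker process load a
    rem user profile so the MCR has a writable temp/cache directory
    %windir%\system32\inetsrv\appcmd set apppool "Classic .NET AppPool" /processModel.loadUserProfile:true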

    Read the article

  • htaccess redirect http to https on a magento site

    - by joesalvator
    I have a Magento site and I need to have it redirect to HTTPS. I have the cert installed, but I am not sure how to modify the .htaccess file. Here is a copy of the root .htaccess, thanks:

    ## uncomment these lines for CGI mode
    ## make sure to specify the correct cgi php binary file name
    ## it might be /cgi-bin/php-cgi
    #Action php5-cgi /cgi-bin/php5-cgi
    #AddHandler php5-cgi .php

    ## GoDaddy specific options
    #Options -MultiViews
    ## you might also need to add this line to php.ini
    ##   cgi.fix_pathinfo = 1
    ## if it still doesn't work, rename php.ini to php5.ini

    ## this line is specific for 1and1 hosting
    #AddType x-mapp-php5 .php
    #AddHandler x-mapp-php5 .php

    ## default index file
    DirectoryIndex index.php

    ## adjust memory limit
    #php_value memory_limit 64M
    php_value memory_limit 128M
    php_value max_execution_time 18000

    ## disable magic quotes for php request vars
    php_flag magic_quotes_gpc off

    ## disable automatic session start before autoload was initialized
    php_flag session.auto_start off

    ## enable resulting html compression
    #php_flag zlib.output_compression on

    ## disable user agent verification to not break multiple image upload
    php_flag suhosin.session.cryptua off

    ## turn off compatibility with PHP4 when dealing with objects
    php_flag zend.ze1_compatibility_mode Off

    ## disable POST processing to not break multiple image upload
    SecFilterEngine Off
    SecFilterScanPOST Off

    ## Insert filter on all content
    ###SetOutputFilter DEFLATE
    ## Insert filter on selected content types only
    #AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript

    ## Netscape 4.x has some problems...
    #BrowserMatch ^Mozilla/4 gzip-only-text/html

    ## Netscape 4.06-4.08 have some more problems
    #BrowserMatch ^Mozilla/4\.0[678] no-gzip

    ## MSIE masquerades as Netscape, but it is fine
    #BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

    ## Don't compress images
    #SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary

    ## Make sure proxies don't deliver the wrong content
    #Header append Vary User-Agent env=!dont-vary

    ## make HTTPS env vars available for CGI mode
    SSLOptions StdEnvVars

    ## enable rewrites
    Options +FollowSymLinks
    RewriteEngine on

    ## you can put here your magento root folder path relative to web root
    #RewriteBase /magento/

    ## workaround for HTTP authorization in CGI environment
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

    ## always send 404 on missing files in these folders
    RewriteCond %{REQUEST_URI} !^/(media|skin|js)/

    ## never rewrite for existing files, directories and links
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-l

    ## rewrite everything else to index.php
    RewriteRule .* index.php [L]

    ## Prevent character encoding issues from server overrides
    ## If you still have problems, use the second line instead
    AddDefaultCharset Off
    #AddDefaultCharset UTF-8

    ## Add default Expires header
    ExpiresDefault "access plus 1 year"

    ## By default allow all access
    Order allow,deny
    Allow from all
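
    As for the redirect itself: one common approach is a rewrite placed right after RewriteEngine on and before the other rules. A sketch (it assumes mod_rewrite is enabled on this host; you would also still want Magento's secure base URL under System > Configuration > Web set to https so Magento itself generates https links):

    ## send any plain-HTTP request to the same URL over HTTPS (301 = permanent)
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]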

    Read the article

  • How to reproduce System.Security.Cryptography.SHA1Managed result in Python

    - by joetyson
    Here's the deal: I'm moving a .NET website to Python. I have a database with passwords hashed using the System.Security.Cryptography.SHA1Managed utility. I'm creating the hash in .NET with the following code:

    string hashedPassword = Cryptographer.CreateHash("MYHasher", userInfo.Password);

    The MYHasher block looks like this:

    <add algorithmType="System.Security.Cryptography.SHA1Managed, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=blahblahblah"
         saltEnabled="true"
         type="Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.HashAlgorithmProvider, Microsoft.Practices.EnterpriseLibrary.Security.Cryptography, Version=3.0.0.0, Culture=neutral, PublicKeyToken=daahblahdahdah"
         name="MYHasher" />

    So for a given password, I get back and store in the database a 48-byte salted SHA-1. I assume the last 8 bytes are the salt. I have tried to reproduce the hashing process in Python by doing sha1(salt + password) and sha1(password + salt), but I'm having no luck. My questions: How are the public key tokens being used? How is the password rehashed using the salt? How is the salt created? (For example, when I say saltEnabled="true", what extra magic happens?) I need specific details that don't just reference other .NET libraries; I'm looking for the actual operational logic that happens inside the black box. Thanks!
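
    For what it's worth, the Enterprise Library HashAlgorithmProvider with saltEnabled="true" is usually described as generating a random 16-byte salt, hashing salt + password bytes, and storing base64(salt + digest). That would also fit the numbers here: 48 base64 characters decode to 36 bytes, i.e. a 16-byte salt plus a 20-byte SHA-1 digest. A Python sketch under those assumptions (the 16-byte salt length and the UTF-16-LE password encoding are both guesses to confirm against a password you already know, not facts taken from this config):

    import base64
    import hashlib

    def verify(stored_b64: str, password: str, salt_len: int = 16) -> bool:
        """Check a candidate password against an EntLib-style salted SHA-1.

        Assumed layout: base64(salt + SHA1(salt + password_bytes)), with the
        password encoded as UTF-16-LE (.NET's in-memory string encoding).
        """
        raw = base64.b64decode(stored_b64)
        salt, digest = raw[:salt_len], raw[salt_len:]
        return hashlib.sha1(salt + password.encode('utf-16-le')).digest() == digest

    # e.g. verify("<48-char value from the users table>", "knownPassword")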

    Read the article

  • Ruby on rails generates tests for you. Do those give a false sense of a safety net?

    - by Hamish Grubijan
    Disclaimer: I have not used RoR, and I have not generated tests. But I will still dare to post this question.

    Quality assurance is theoretically impossible to get 100% right in general (undecidable problem ;), and it is hard in practice. Many developers do not understand that writing good automated tests is an art, and a hard one. When I hear that RoR generates the tests for you, I get very skeptical. It cannot be that easy.

    Testing is a general concept; it applies across languages. So does the concept of code contracts, for the languages that support them. Code contracts do not generate themselves: the programmer must add the requirements and the promises manually, after doing some thinking about the algorithm or function. If a human gets it wrong, then the tools will propagate the error. Similarly with testing: it takes human judgement about what should happen. Tests do not write themselves, and we are far from the day when a business analyst can just have an informal conversation with a computer about the requirements and have it do all the work. There is no magic... how can RoR generate good tests for you?

    Please shed some light on this. Opinions are OK, for this is a community wiki. Thanks!

    Read the article

< Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >