Search Results



  • How can I associate my main domain with an RSS feed?

    - by Renan
    This is an institutional site which uses Tumblr to keep visitors updated. The Tumblr blog is linked via frames in the /blog/ subdirectory of the domain in order to track stats. My question is: how can I let my visitors retrieve the Tumblr updates by simply entering www.domain.com in their feed reader instead of domain.tumblr.com? Or, alternatively, by entering www.domain.com/blog? P.S. For some reason the main domain is still associated with the old Blogspot address.
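    One common way to achieve the first variant (a sketch, not from the question; it assumes the pages of www.domain.com can be edited and that Tumblr's standard /rss feed path is in use): add a feed autodiscovery tag, so a feed reader given the bare domain finds the Tumblr feed. The stale Blogspot association usually comes from an old autodiscovery tag of exactly this kind.

        <!-- in the <head> of www.domain.com pages: RSS autodiscovery -->
        <link rel="alternate" type="application/rss+xml"
              title="Site updates" href="http://domain.tumblr.com/rss" />

    For the /blog variant, a redirect on your own server (for example, Redirect 302 /blog/rss http://domain.tumblr.com/rss in Apache terms) would let readers subscribe to www.domain.com/blog/rss.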


  • Any good Cocos2d pause menu library?

    - by Mahbubur R Aaman
    Background: From http://code.google.com/p/cocos2d-iphone/issues/detail?id=173 - Scenes/Nodes don't support the CocosNodeOpacity protocol. From http://playsnackgames.com/blog/2011/09/cocos2d-tutorial-creating-a-reusable-pause-layer/ - Cocos2d offers a simple method to pause and resume itself, but these methods stop the CCDirector (the class that manages most aspects of a Cocos2d app's lifetime) from running actions and lower the fps to 5 to conserve battery life. Related issues:
    http://www.cocos2d-iphone.org/forum/topic/4368
    http://www.cocos2d-iphone.org/forum/topic/151
    http://stackoverflow.com/questions/5852354/cocos2d-engine-pause-resume
    http://stackoverflow.com/questions/11878450/how-to-pause-a-layer-in-cocos2d-2-0
    Question: Is there any good Cocos2d pause menu library that solves these tricky issues? It would save game developers many hours.


  • Minecraft Style Chunk building problem

    - by David Torrey
    I'm having some problems with speed in my chunk engine. I timed it out, and in its current state it takes a total of ~5 seconds per chunk to fill each face's list. I have a check to see if each face of a block is visible, and if it is not visible, it skips it and moves on. I'm using a dictionary (unordered map) because it makes sense memory-wise to just not have an entry if there is no block. I've tracked my problem down to testing if there is an entry, and accessing an entry if it does exist. If I remove the tests to see if there is an entry in the dictionary for an adjacent block, or if the block type itself is see-through, it runs within about 2-4 milliseconds. So here's my question: is there a faster way to check for an entry in a dictionary than .ContainsKey()? As an aside, I tried TryGetValue() and it doesn't really help with the speed that much. If I remove the ContainsKey() and keep the test where it does the IsSeeThrough for each block, it halves the time, but it's still about 2-3 seconds. It only drops to 2-4 ms if I remove BOTH checks. Here is my code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Threading.Tasks;
        using System.Runtime.InteropServices;
        using OpenTK;
        using OpenTK.Graphics.OpenGL;
        using System.Drawing;

        namespace Anabelle_Lee {
            public enum BlockEnum {
                air = 0,
                dirt = 1,
            }

            [StructLayout(LayoutKind.Sequential, Pack = 1)]
            public struct Coordinates<T1> {
                public T1 x;
                public T1 y;
                public T1 z;

                public override string ToString() {
                    return "(" + x + "," + y + "," + z + ") : " + typeof(T1);
                }
            }

            public struct Sides<T1> {
                public T1 left;
                public T1 right;
                public T1 top;
                public T1 bottom;
                public T1 front;
                public T1 back;
            }

            public class Block {
                public int blockType;

                public bool SeeThrough() {
                    switch (blockType) {
                        case 0:
                            return true;
                    }
                    return false;
                }

                public override string ToString() {
                    return ((BlockEnum)(blockType)).ToString();
                }
            }

            class Chunk {
                private Dictionary<Coordinates<byte>, Block> mChunkData; // stores the block data
                private Sides<List<Coordinates<byte>>> mVBOVertexBuffer;
                private Sides<int> mVBOHandle;
                //private bool mIsChanged;
                private const byte mCHUNKSIZE = 16;

                public Chunk() {
                }

                public void InitializeChunk() {
                    // create VBO references
        #if DEBUG
                    Console.WriteLine("Initializing Chunk");
        #endif
                    mChunkData = new Dictionary<Coordinates<byte>, Block>();
                    //mIsChanged = true;
                    GL.GenBuffers(1, out mVBOHandle.left);
                    GL.GenBuffers(1, out mVBOHandle.right);
                    GL.GenBuffers(1, out mVBOHandle.top);
                    GL.GenBuffers(1, out mVBOHandle.bottom);
                    GL.GenBuffers(1, out mVBOHandle.front);
                    GL.GenBuffers(1, out mVBOHandle.back);

                    // make a new list of vertices for each face
                    mVBOVertexBuffer.top = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.bottom = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.left = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.right = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.front = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.back = new List<Coordinates<byte>>();
        #if DEBUG
                    Console.WriteLine("Chunk Initialized");
        #endif
                }

                public void GenerateChunk() {
        #if DEBUG
                    Console.WriteLine("Generating Chunk");
        #endif
                    for (byte i = 0; i < mCHUNKSIZE; i++) {
                        for (byte j = 0; j < mCHUNKSIZE; j++) {
                            for (byte k = 0; k < mCHUNKSIZE; k++) {
                                Random blockLoc = new Random();
                                Coordinates<byte> randChunk = new Coordinates<byte> { x = i, y = j, z = k };
                                mChunkData.Add(randChunk, new Block());
                                mChunkData[randChunk].blockType = blockLoc.Next(0, 1);
                            }
                        }
                    }
        #if DEBUG
                    Console.WriteLine("Chunk Generated");
        #endif
                }

                public void DeleteChunk() {
                    // delete VBO references
        #if DEBUG
                    Console.WriteLine("Deleting Chunk");
        #endif
                    GL.DeleteBuffers(1, ref mVBOHandle.left);
                    GL.DeleteBuffers(1, ref mVBOHandle.right);
                    GL.DeleteBuffers(1, ref mVBOHandle.top);
                    GL.DeleteBuffers(1, ref mVBOHandle.bottom);
                    GL.DeleteBuffers(1, ref mVBOHandle.front);
                    GL.DeleteBuffers(1, ref mVBOHandle.back);
                    // clear all vertex buffers
                    ClearPolyLists();
        #if DEBUG
                    Console.WriteLine("Chunk Deleted");
        #endif
                }

                public void UpdateChunk() {
        #if DEBUG
                    Console.WriteLine("Updating Chunk");
        #endif
                    ClearPolyLists(); // prepare buffers

                    // for every entry in the mChunkData map
                    foreach (KeyValuePair<Coordinates<byte>, Block> feBlockData in mChunkData) {
                        Coordinates<byte> checkBlock = new Coordinates<byte> {
                            x = feBlockData.Key.x, y = feBlockData.Key.y, z = feBlockData.Key.z };

                        // check for polygons on the left side of the cube
                        if (checkBlock.x > 0) {
                            // check to see if there is a key for current x - 1; if not, add the vector
                            if (!IsVisible(checkBlock.x - 1, checkBlock.y, checkBlock.z)) {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left);
                            }
                        } else {
                            // polygon is on the far left and should be added
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left);
                        }

                        // check for polygons on the right side of the cube
                        if (checkBlock.x < mCHUNKSIZE - 1) {
                            if (!IsVisible(checkBlock.x + 1, checkBlock.y, checkBlock.z)) {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right);
                            }
                        } else {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right);
                        }

                        // bottom
                        if (checkBlock.y > 0) {
                            if (!IsVisible(checkBlock.x, checkBlock.y - 1, checkBlock.z)) {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom);
                            }
                        } else {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom);
                        }

                        // top
                        if (checkBlock.y < mCHUNKSIZE - 1) {
                            if (!IsVisible(checkBlock.x, checkBlock.y + 1, checkBlock.z)) {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top);
                            }
                        } else {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top);
                        }

                        // back
                        if (checkBlock.z > 0) {
                            if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z - 1)) {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back);
                            }
                        } else {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back);
                        }

                        // front
                        if (checkBlock.z < mCHUNKSIZE - 1) {
                            if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z + 1)) {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front);
                            }
                        } else {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front);
                        }
                    }
                    BuildBuffers();
        #if DEBUG
                    Console.WriteLine("Chunk Updated");
        #endif
                }

                public void RenderChunk() {
                }

                public void LoadChunk() {
        #if DEBUG
                    Console.WriteLine("Loading Chunk");
        #endif
        #if DEBUG
                    Console.WriteLine("Chunk Deleted");
        #endif
                }

                public void SaveChunk() {
        #if DEBUG
                    Console.WriteLine("Saving Chunk");
        #endif
        #if DEBUG
                    Console.WriteLine("Chunk Saved");
        #endif
                }

                private bool IsVisible(int pX, int pY, int pZ) {
                    Block testBlock;
                    Coordinates<byte> checkBlock = new Coordinates<byte> {
                        x = Convert.ToByte(pX), y = Convert.ToByte(pY), z = Convert.ToByte(pZ) };
                    if (mChunkData.TryGetValue(checkBlock, out testBlock)) { // if data exists
                        if (testBlock.SeeThrough() == true) {
                            return true;
                        }
                    }
                    return true; // note: as posted, every path returns true
                }

                private void AddPoly(byte pX, byte pY, byte pZ, int BufferSide) {
                    GL.BindBuffer(BufferTarget.ArrayBuffer, BufferSide);
                    if (BufferSide == mVBOHandle.front) {
                        // front face
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY),     z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                    } else if (BufferSide == mVBOHandle.right) {
                        // back face
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY),     z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY),     z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                    } else if (BufferSide == mVBOHandle.top) {
                        // left face
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY),     z = (byte)(pZ) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY),     z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY),     z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
                    } else if (BufferSide == mVBOHandle.bottom) {
                        // right face
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                    } else if (BufferSide == mVBOHandle.front) {
                        // top face (note: as posted, this repeats the mVBOHandle.front test above, so it is never reached)
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ) });
                    } else if (BufferSide == mVBOHandle.back) {
                        // bottom face
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY), z = (byte)(pZ + 1) });
                    }
                }

                private void BuildBuffers() {
        #if DEBUG
                    Console.WriteLine("Building Chunk Buffers");
        #endif
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.front);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.front.Count), mVBOVertexBuffer.front.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.back);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.back.Count), mVBOVertexBuffer.back.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.left);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.left.Count), mVBOVertexBuffer.left.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.right);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.right.Count), mVBOVertexBuffer.right.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.top);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.top.Count), mVBOVertexBuffer.top.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.bottom);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.bottom.Count), mVBOVertexBuffer.bottom.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
        #if DEBUG
                    Console.WriteLine("Chunk Buffers Built");
        #endif
                }

                private void ClearPolyLists() {
        #if DEBUG
                    Console.WriteLine("Clearing Polygon Lists");
        #endif
                    mVBOVertexBuffer.top.Clear();
                    mVBOVertexBuffer.bottom.Clear();
                    mVBOVertexBuffer.left.Clear();
                    mVBOVertexBuffer.right.Clear();
                    mVBOVertexBuffer.front.Clear();
                    mVBOVertexBuffer.back.Clear();
        #if DEBUG
                    Console.WriteLine("Polygon Lists Cleared");
        #endif
                }
            } // END CLASS
        } // END NAMESPACE
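    One possibility worth ruling out, offered as an editorial sketch rather than part of the original question: Coordinates<T1> is a generic struct that never overrides Equals or GetHashCode, so the Dictionary falls back to the slow reflection-based ValueType defaults on every lookup. Implementing value equality with a cheap hash often makes ContainsKey/TryGetValue dramatically faster:

        // Hypothetical sketch: a non-generic key struct with value equality and a fast hash.
        // Assumes chunk-local coordinates fit in a byte, as in the original code.
        public struct BlockCoord : IEquatable<BlockCoord> {
            public byte x, y, z;

            public bool Equals(BlockCoord other) {
                return x == other.x && y == other.y && z == other.z;
            }

            public override bool Equals(object obj) {
                return obj is BlockCoord && Equals((BlockCoord)obj);
            }

            public override int GetHashCode() {
                // pack the three bytes into one int; unique within a 16x16x16 chunk
                return x | (y << 8) | (z << 16);
            }
        }

    For a fixed 16x16x16 chunk, a flat Block[,,] array indexed directly by x, y and z avoids hashing altogether, at the cost of storing empty cells.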


  • Masking a redirection in IIS7

    - by SydxPages
    My tools: IIS7 on 1 x Windows 2008 Server, with IIS URL Rewrite Module 2 installed. My requirement: mask the redirection of www.bob.com to www.abc.com/bob/index.html; the end user should not see www.abc.com. The user should then be able to browse the website as normal. I have found references to installing ARR, but that seems to be more for load balancing etc.? Others have said to use a third-party tool.
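    For what it's worth, ARR is in fact the usual answer here: URL Rewrite can only rewrite within the same server unless ARR's proxy mode is enabled, after which a rule like the following (a sketch using the hostnames from the question) forwards requests while keeping www.bob.com in the address bar:

        <!-- web.config sketch for the www.bob.com site; assumes ARR is installed
             and "Enable proxy" is checked under Server Proxy Settings -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="MaskBob" stopProcessing="true">
                <match url="(.*)" />
                <action type="Rewrite" url="http://www.abc.com/bob/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>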


  • What is a name for a job where you do system analysis, project management and data diagramming?

    - by David Archer
    In the last 4 months I've been able to manage a team and step away from the coding for a bit. I've been planning the system in full (both systems analysis and project management, alongside action and data diagramming), writing the technical documentation and the code's architecture, keeping track of the other guys doing the actual coding, handling QA and bug reports, and dealing with clients. I had to take two days' training on node.js just to see if it would be suitable for a project we were considering. Is there a name for this job? Project Manager and Systems Architect don't quite seem to cover the same stuff, and IT Manager seems way off. I only want to know so that I can get some qualification towards it and try to move into this kind of work full-time.


  • Alternatives to Google Website Optimizer

    - by yahelc
    What (affordable) alternatives are there to Google Website Optimizer for A/B and multivariate tests? The pros of GWO are basically that it's free and that it integrates with Google Analytics. The con: the relatively high time cost of setting up a test. Some alternatives I've seen so far:
    - Optimizely.com
    - VisualWebsiteOptimizer.com
    - Genetify (wiki.github.com/gregdingle/genetify/) - free and open source, but there seems to be no developer community committed to the project.


  • Translate jQuery UI Datepicker format to .Net Date format

    - by Michael Freidgeim
    I needed to use the same date format in the client-side jQuery UI Datepicker and in server-side ASP.NET code. The actual format can differ between localization cultures. I decided to translate the Datepicker format to the .NET date format, similar to the opposite operation asked about in http://stackoverflow.com/questions/8531247/jquery-datepickers-dateformat-how-to-integrate-with-net-current-culture-date. Note that the replace command needs to replace whole words, and the order of the calls is important. A function that does the opposite operation (translating a .NET date format to a Datepicker format) is described in http://www.codeproject.com/Articles/62031/JQueryUI-Datepicker-in-ASP-NET-MVC

        /// <summary>
        /// Uses regex '\b' as suggested in
        /// http://stackoverflow.com/questions/6143642/way-to-have-string-replace-only-hit-whole-words
        /// </summary>
        static public string ReplaceWholeWord(this string original, string wordToFind,
            string replacement, RegexOptions regexOptions = RegexOptions.None)
        {
            string pattern = String.Format(@"\b{0}\b", wordToFind);
            string ret = Regex.Replace(original, pattern, replacement, regexOptions);
            return ret;
        }

        /// <summary>
        /// E.g. "DD, d MM, yy" to "dddd, d MMMM, yyyy"
        /// </summary>
        /// <remarks>
        /// Idea from http://stackoverflow.com/questions/8531247/jquery-datepickers-dateformat-how-to-integrate-with-net-current-culture-date
        /// Mapping from http://docs.jquery.com/UI/Datepicker/$.datepicker.formatDate
        /// to http://msdn.microsoft.com/en-us/library/8kb3ddd4.aspx
        /// d  - day of month (no leading zero)  --- .NET: the same
        /// dd - day of month (two digit)        --- .NET: the same
        /// D  - day name short                  --- .NET: "ddd"
        /// DD - day name long                   --- .NET: "dddd"
        /// m  - month of year (no leading zero) --- .NET: "M"
        /// mm - month of year (two digit)       --- .NET: "MM"
        /// M  - month name short                --- .NET: "MMM"
        /// MM - month name long                 --- .NET: "MMMM"
        /// y  - year (two digit)                --- .NET: "yy"
        /// yy - year (four digit)               --- .NET: "yyyy"
        /// </remarks>
        public static string JQueryDatePickerFormatToDotNetDateFormat(string datePickerFormat)
        {
            string sRet = datePickerFormat.ReplaceWholeWord("DD", "dddd").ReplaceWholeWord("D", "ddd");
            sRet = sRet.ReplaceWholeWord("M", "MMM").ReplaceWholeWord("MM", "MMMM")
                       .ReplaceWholeWord("m", "M").ReplaceWholeWord("mm", "MM"); // order is important
            sRet = sRet.ReplaceWholeWord("yy", "yyyy").ReplaceWholeWord("y", "yy"); // order is important
            return sRet;
        }
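    A quick usage sketch (the input pattern is the one from the function's own comment):

        // jQuery UI's "Saturday, 3 November, 2012" style pattern
        string netFormat = JQueryDatePickerFormatToDotNetDateFormat("DD, d MM, yy");
        // netFormat == "dddd, d MMMM, yyyy"
        string formatted = DateTime.Now.ToString(netFormat);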


  • Squid site redirection

    - by AndyM
    I have an internal website that cannot be accessed from some machines on my network, due to physical location, VPN, network ranges, etc. I would like to install Squid on an "in between" network to forward requests from the clients that cannot reach the website. The issue is that the clients have no ability to connect to www.example.com, but they can reach a network with a Squid proxy, which in turn can reach www.example.com. What is the correct term I need to research in Squid? Is it just caching www.example.com, or do I need to set the clients to use a URL that gets rewritten, i.e. www.squid-example.com -> www.example.com?
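    The term to research is reverse proxy, which the Squid documentation calls accelerator mode. A minimal squid.conf sketch using the hostnames from the question (and assuming www.squid-example.com resolves to the Squid box for those clients):

        # listen as an accelerator for www.squid-example.com
        http_port 80 accel defaultsite=www.squid-example.com

        # forward everything to the origin server the clients cannot reach
        cache_peer www.example.com parent 80 0 no-query originserver name=origin
        acl our_site dstdomain www.squid-example.com
        cache_peer_access origin allow our_site
        http_access allow our_site

    With this in place the clients simply browse to www.squid-example.com; no browser proxy settings or URL rewriting on the client side are needed.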


  • IIS 7 URL rewrite rule

    - by Andrew
    Hello, guys! We have two web servers behind a router: one IIS and one Tomcat (on different machines / IP addresses). The domain points to our external IP, which is forwarded to IIS (internal IP 192.168.1.10, for example). I'm trying to do the following: when [www.]ourdomain.com is entered, the default web site on IIS has to be loaded (this part is OK), but when test.ourdomain.com is entered I want to redirect the request to the other web server (192.168.1.11, for example). I created a site "test" on IIS and it is displayed when test.ourdomain.com is entered. Then I tried to redirect it with the following rule: Requested URL matches the pattern: * (using wildcards); Condition: {HTTP_HOST} matches test.ourdomain.com; Action type: Rewrite; Rewrite URL: http://192.168.1.11/{R:0}. But when I try to load test.ourdomain.com now, I get IIS's 404 error page. Obviously I'm wrong :-) How can I do such a redirect?
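    A likely cause, offered as a suggestion rather than a confirmed diagnosis: rewriting to another host (http://192.168.1.11/...) only works when Application Request Routing (ARR) is installed and its proxy mode is enabled; without it, URL Rewrite treats the action as a local rewrite and the request 404s. Proxy mode is a server-level setting:

        <!-- applicationHost.config sketch; equivalent to checking "Enable proxy"
             in IIS Manager under Application Request Routing > Server Proxy Settings -->
        <system.webServer>
          <proxy enabled="true" reverseRewriteHostInResponseHeaders="true" />
        </system.webServer>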


  • Mirror virtualized development environment

    - by David Casillas
    I work alone on some iOS projects in a local environment. I have been thinking of a way to share my development environment between my Mac mini and my MacBook. I mostly work at home on the Mini, but sometimes I need to do a demo or work outside, and I would like to have the development environment mirrored on both. I have thought of using a virtual machine (via VirtualBox) with just my development tools installed. I could then synchronize that VM between both computers with some software, so I would always have the exact same environment no matter which computer I use. Is there any good reason not to do it this way? I have not used virtualization much, so I have no background on the subject. My basic setup would be:
    Mac mini: i7 dual core, 8 GB, OS X Mountain Lion host OS
    MacBook: 2.4 GHz Core 2 Duo, 4 GB, OS X Lion host OS
    VirtualBox with a Mountain Lion guest OS on both machines, Xcode 5, Simulator.


  • How do I receive email sent to postmaster?

    - by jonescb
    I have a VPS server that I would like to get an SSL certificate for, and the CA needs an email address to verify that I own the domain. The options are: postmaster@mydomain.com, hostmaster@mydomain.com, webmaster@mydomain.com, and an address at @whoisguard.com. The server runs CentOS 5, and all I have set up for email is sendmail. I don't have POP3 or IMAP. According to this Wikipedia article on Postmaster, all SMTP servers support postmaster, citing RFC 5321. Does sendmail conform to this? I tried sending a test mail to postmaster@mydomain.com, but I don't know how to receive it on my server. Do I need to open up any ports? I haven't gotten a message back saying that my test mail failed to send, so my server must have gotten it.
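    Sendmail does honor the postmaster requirement out of the box: mail to postmaster is delivered to a local mailbox via /etc/aliases, so nothing needs to be opened beyond port 25, which is evidently already accepting mail. A sketch of the usual steps (the alias target myuser is a placeholder, not anything from the question):

        # /etc/aliases: route postmaster (and root) mail to a real local user
        postmaster: root
        root: myuser

        # rebuild the alias database after editing
        newaliases

        # read the delivered mail from the local spool
        less /var/spool/mail/myuser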


  • Set up internal domain to use external SMTP in Exchange 2007

    - by Geoffrey
    I'm moving to Google Apps and have set up dual delivery. Everything is fine, but for mail sent internally (from sally@mydomain.com to another user at mydomain.com), Exchange is not using the send connectors I have pointing to Google's servers. I believe my question is similar to this one: How to force internal email through an smtp connector in exchange 2007. Again, if a user is connected to the Exchange server and tries to send to george@outsidedomain.com it works just fine, but I cannot seem to force *@mydomain.com to route correctly. This should be fairly simple, but according to this: google.com/support/forum/p/Google+Apps/thread?tid=30b6ad03baa57289&hl=en (can't post two links due to spam prevention) it does not seem possible. Any ideas?


  • Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

    - by James Rogers
    Many implementations require the use of ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops all of the affected indexes and aggregations in partitions affected by data that changes in the Dimension on which the ProcessUpdate is being performed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred." Any subsequent ProcessUpdate processing will also fail with the same error. In talking with Microsoft, the issue is corrupt indexes for the Dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of down time.

    Workaround: run ProcessIndexes on the entire cube that is being processed and throwing the error. This corrected the problem on both 2008 and 2008 R2.

    Pros: Does not require a complete rebuild of the data (ProcessFull) for either the Dimension or Cube. User access can continue while the ProcessIndexes is underway.

    Cons: Can take a long time, especially on large cubes with many partitions, dimensions and/or aggregations. Query performance is usually severely impacted due to the memory and CPU requirements for aggregation and index building.

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
                     xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
                     xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
                     xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200"
                     xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>

    The cube where the corruption exists can be found by having Profiler running while the ProcessUpdate is executing. The first partition that displays the "The Job has ended in failure." message in the TextData column will be part of the cube/measure group that has the corruption. You can try to run ProcessIndexes on just that measure group. This may correct the problem and save additional time if you have other large measure groups in the cube that are not affected by the corruption.

    Remember to execute your normal ProcessUpdate batch after the successful completion of the ProcessIndexes. The ProcessIndexes does not pick up data changes.

    Things that did not work:
    ProcessClearIndexes - why this doesn't work while ProcessIndexes does is unclear at this point.
    ProcessFull on the partition in question - in my latest case, this would clear up the problem for that partition. However, the next partition the ProcessUpdate touched that had data in it would generate an error. This leads me to believe the corruption problem will exist in all partitions in the affected measure group that have data in them.

    NOTE: I experienced this problem in both a SQL 2008 and a SQL 2008 R2 Analysis Services environment, on separate builds from the same relational database. This leads me to believe that some data condition in the tables used for the Dimension processing caused the corruption, since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps to give us more insight into what actually caused the corruption and will update this post accordingly.


  • Solaris 11 Update 1 - Link Aggregation

    - by Wesley Faria
    Solaris 11.1
    At the beginning of this month, at Oracle's worldwide event Oracle OpenWorld, the new release of Solaris 11 was launched. It arrives full of news: roughly 300 new features across networking, security, administration and more. Today I am going to talk about a very interesting network feature, Link Aggregation. Solaris has supported Link Aggregation since Solaris 10 Update 1, but Solaris 11 Update 1 brings significant improvements. Link Aggregation, as the name says, aggregates more than one physical network interface into a single logical interface. Some of its capabilities:
    - increase bandwidth;
    - improve resilience by performing failover and failback;
    - simplify network administration.

    Solaris 11.1 supports two types of Link Aggregation, trunk aggregation and datalink multipathing (DLMP) aggregation. Both work by distributing network packets across the interfaces of the aggregation, ensuring better utilization of the network. Let's look at each of them.

    Trunk Aggregation
    Trunk aggregation aims to increase bandwidth, whether for applications with heavy network traffic or for consolidation. For example, say a server was acquired to host several virtual machines, each with its own demand, and the server has two network cards. We can create an aggregation of those two cards so that Solaris 11.1 sees them as if they were one, doubling the bandwidth, as shown in the figure below. The figure shows an aggregation of two physical NICs (NIC 1 and NIC 2) connected to the same switch, plus two virtual interfaces (VNIC A and VNIC B). For this to work, the switch must support LACP (Link Aggregation Control Protocol). LACP performs the aggregation at the switch layer; if this is not done, the packets leaving the server cannot be reassembled when they reach the switch. Another way to configure trunk aggregation is point-to-point, where the two servers are connected directly instead of through a switch. In this case the aggregation on one server talks directly to the aggregation on the other, providing both protection against failures and greater bandwidth. Configuring trunk aggregation:

    1 - Check which interfaces are available: # dladm show-link
    2 - Check the interfaces: # ipadm show-if
    3 - Delete the addressing of the existing interfaces: # ipadm delete-ip <interface>
    4 - Create the trunk aggregation: # dladm create-aggr -L active -l <interface> -l <interface> aggr0
    5 - List the aggregation created: # dladm show-aggr

    Datalink Multipathing Aggregation
    As we saw, trunk aggregation is implemented against a single switch that supports LACP, so the switch is a single point of failure. To work around this on Solaris 10 we used IPMP (IP Multipathing), combining two aggregations on the same link, i.e. yet another virtualization layer. With Solaris 11 Update 1 that is no longer necessary: you can build one aggregation from physical interfaces that are connected to different switches. See the figure below. Here we have an aggregation named aggr containing four physical interfaces, where NIC 1 and NIC 2 are connected to one switch and NIC 3 and NIC 4 to another.
    In addition, four virtual interfaces (vnic A, vnic B, vnic C and vnic D) were created on top of it, which can be assigned to different applications/zones. This gives high availability at every layer, since we can survive failures of switches, links and physical network interfaces. To configure it, follow the trunk aggregation steps up to step 3, then:

    4 - Create the DLMP aggregation: # dladm create-aggr -m haonly -l <interface> -l <interface> aggr0
    5 - List the aggregation created: # dladm show-aggr

    Once configured, in either trunk aggregation or datalink multipathing mode, you can switch from one mode to the other and add or remove physical or virtual interfaces. Well folks, that was what I had to show about the new Link Aggregation functionality in Solaris 11 Update 1. I hope you liked it; until the next new feature.
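    A concrete version of the steps above, with hypothetical link names net0 and net1 in place of the <interface> placeholders:

        # trunk aggregation over net0 and net1, LACP active
        dladm create-aggr -L active -l net0 -l net1 aggr0

        # or a DLMP aggregation of the same links (switch-independent HA)
        dladm create-aggr -m haonly -l net0 -l net1 aggr0

        # then give the aggregation an IP interface and an address
        ipadm create-ip aggr0
        ipadm create-addr -T dhcp aggr0/v4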


  • Put an end to end-user anonymity (1/2)

    - by david.krch
    Knowing the identity of the end user in every layer of the system is a basic necessity when building secure applications. Today we will show how a program can use the Client Identifier to pass this information to the database server even when the application shares the same database connection for all users, as is common in today's web applications.
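    As a quick illustration of the mechanism (a sketch; the post's own examples are not included in this excerpt), the client identifier is set through DBMS_SESSION on the shared connection and can then be read back anywhere in the database, for example in auditing or VPD code:

        -- set the end user's identity on the shared connection (e.g. per request)
        BEGIN
          DBMS_SESSION.SET_IDENTIFIER('jsmith');
        END;
        /

        -- later, any layer can read it back
        SELECT SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER') FROM dual;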


  • Thousands of 404 errors in Google Webmaster Tools

    - by atticae
    Because of a former error in our ASP.Net application, created by my predecessor and undiscovered for a long time, thousands of wrong URLs were created dynamically. Normal users did not notice it, but Google followed these links and crawled its way through these incorrect URLs, creating more and more wrong links. To make it clearer: the URL example.com/folder should have created the link example.com/folder/subfolder, but created example.com/subfolder instead. Because of bad URL rewriting this was accepted, and by default the index page was shown for any unknown URL, creating more and more links like example.com/subfolder/subfolder/.... The problem is resolved by now, but I have thousands of 404 errors listed in Google Webmaster Tools, some discovered 1 or 2 years ago, and more keep coming up. Unfortunately the links do not follow a common pattern that I could deny for crawling in robots.txt. Is there anything I can do to stop Google from trying those very old links and remove the already listed 404s from Webmaster Tools?


  • ArchBeat Link-o-Rama for 2012-03-28

    - by Bob Rhubart
    Beware the 'Facebook Effect' when service-orienting information technology | Joe McKendrick (www.zdnet.com)
    Experiences seen with Facebook provide a fair warning to shared-service providers in enterprises.

    Cookbook: SES and UCM setup | George Maggessy (blogs.oracle.com)
    WebCenter A-Team member George Maggessy guides you through setting up the integration between UCM and SES.

    Using Oracle VM with Amazon EC2 | Marc Fielding (www.pythian.com)
    "If you’re planning on running Oracle VM with Amazon EC2, there are some important limitations you should know about," says Pythian's Marc Fielding.

    Oracle Enterprise Pack for Eclipse 12.1.1 update on OTN (blogs.oracle.com)
    Oracle Enterprise Pack for Eclipse (OEPE) 12.1.1.0.1 was released to OTN last week with support for new standards and several new features.

    Thought for the Day: "If the mind really is the finest computer, then there are a lot of people out there who need to be rebooted." — Tim Bryce


  • ASP.Net MVC - how to post values to the server that are not in an input element

    - by David Carter
    Problem
    As mentioned in a previous blog, I am building a web page that allows the user to select dates in a calendar, which are then shown in an unordered list. The problem now is that those dates need to be sent to the server on page submit so that they can be saved to the database. If I was storing the dates in an input element, say a textbox, that wouldn't be an issue, but because they are in an html element whose contents are not posted to the server, an alternative strategy needs to be developed.

    Solution
    The approach that I took to solve this problem is as follows:

    1. Place a hidden input field on the form:

        <input id="hiddenDates" name="hiddenDates" type="hidden" value="" />

    ASP.Net MVC has an Html helper with a method Hidden() that will do this for you: @Html.Hidden("hiddenDates").

    2. Copy the values from the html element to the hidden input field before submitting the form. The following javascript is added to the page:

        $(function () {
            $('#formCreate').submit(function () {
                PopulateHiddenDates();
            });
        });

        function PopulateHiddenDates() {
            var dateValues = '';
            $($('#dateList').children('li')).each(function (index) {
                dateValues += $(this).attr("id") + ",";
            });
            $('#hiddenDates').val(dateValues);
        }

    I'm using jQuery to bind to the form submit event so that my method to populate the hidden field gets called before the form is submitted. The dateList element is an unordered list, and by using the jQuery each function I can iterate through all the <li> items it contains, get each item's id attribute (to which I have assigned the value of the date in milliseconds) and write them to the hidden field as a comma-delimited string.

    3. Process the dates on the server:

        [HttpPost]
        public ActionResult Create(string hiddenDates, int utcOffset) // int, so the call to GetDates below type-checks
        {
            List<DateTime> dates = GetDates(hiddenDates, utcOffset);
        }

        private List<DateTime> GetDates(string hiddenDates, int utcOffset)
        {
            List<DateTime> dates = new List<DateTime>();
            var values = hiddenDates.Split(",".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
            foreach (var item in values)
            {
                DateTime newDate = new DateTime(1970, 1, 1)
                    .AddMilliseconds(double.Parse(item))
                    .AddMinutes(utcOffset * -1);
                dates.Add(newDate);
            }
            return dates;
        }

    By declaring a parameter with the same name as the hidden field, ASP.Net will take care of finding the corresponding entry in the form collection posted back to the server and binding it to the hiddenDates parameter! Excellent! I now have the dates the user selected and I can save them to the database. I have also used the same technique to pass back a utcOffset so that I know what timezone the user is in and can show the dates correctly to users in other timezones if necessary (this isn't strictly necessary at the moment, but I plan to introduce times later). Saving multiple dates from an unordered list - DONE!


  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) - Are you Feeling Lucky?

    - by David Totzke
    I’m currently working for a client on a PowerBuilder to WPF migration.  It’s one of those “I could tell you, but I’d have to kill you” kind of clients and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already. At approximately 3 or 4 pm that day users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application.  Batman[2] is a document management system here that also integrates with the ERP system.  Very little goes on here that doesn’t involve Batman in some way.  The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host etc…) but the real issue was much more insidious. Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue.  You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above.  A run of DBCC CHECKDB revealed 14 tables with corruption.  One of the tables with issues was the Document table.  Pretty central to a “document management” system.  Information was obtained from IT that a single drive in the SAN went bad in the night.  A new drive was in place and was working fine.  The partition that held the Batman database is configured for RAID Level 5 so a single drive failure shouldn’t have caused any trouble and yet, the database is corrupted.  They do hourly incremental backups here so the first thing done was to try a restore.  A restore of the most recent backup failed so they worked backwards until they hit a good point.  This successful restore was for a backup at 3AM – a full day behind.  This time also roughly corresponds with the time the SAN started to report the drive failure.  The plot thickens… I got my hands on the output from DBCC CHECKDB and noticed a pattern.  What’s sad is that nobody that should have noticed the pattern in the DBCC output did notice.  There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place.  Cooler heads must prevail in these circumstances and some investigation should be done and a plan of action laid out or you could end up making things worse[3].  DBCC CHECKDB also told us that: repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB Yikes.  That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state.  All the more reason to do a little more investigation into the problem.  Rescuing this database is preferable to having to export all of the data possible from this database into a new one.  This is a fifteen year old application with about seven hundred tables.  There are TRIGGERS everywhere not to mention the referential integrity constraints to deal with.  Only fourteen of the tables have an issue.  We have a good backup that is missing the last 24 hours of business which means we could have a “do-over” of yesterday but that’s not a very palatable option either. All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data which basically means TEXT, IMAGE or NTEXT columns.  If we did a SELECT on an affected table and excluded those columns, we got all of the rows.  We exported that data into a separate database.  Things are looking up.  
Working on a copy of the production database we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that “fixed” everything up.   The allow data loss option will delete the bad rows.  This isn’t too horrible as we have all of those rows minus the text fields from our earlier export.  Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information.  We got lucky in that all of the affected rows were old and in the end we didn’t lose anything.  :O  All of the row counts along the way worked out and it looks like we dodged a major bullet here. We’ve heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy.  This thing is only a couple of months old.  Grrr…. They dispatched a technician that night to come and update it.  That explains why RAID didn’t save us. All-in-all this could have been a lot worse.  Given the root cause here, they basically won the lottery in not losing anything. Here are a few links to some helpful posts on the SQL Server Engine blog.  I love the title of the first one:
Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear?
CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series)
Ta da! Emergency mode repair (we didn’t have to resort to this one thank goodness)
Dave
Just because I can…
[1] Names have been changed to protect the guilty.
[2] I'm Batman.
[3] And if I'm the coolest head in the room, you've got even bigger problems...
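    For reference, the repair sequence described above looks roughly like this in T-SQL (a sketch using the post's fictional database name; REPAIR_ALLOW_DATA_LOSS requires the database to be in single-user mode):

        -- run against a COPY of the database first, as the post describes
        ALTER DATABASE BatmanDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS);
        ALTER DATABASE BatmanDb SET MULTI_USER;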


  • Chrome OS in triple boot with Ubuntu (elementary OS / Ubuntu GNOME) and Windows 8.1

    - by Aniel Arias
    Hi, I'm wondering how to install Chrome OS on my hard drive in a triple boot with Ubuntu and Windows 8.1. Please, I need help with this. I have followed some guides from here: https://sites.google.com/site/installationubuntu/chrome-os/make-your-own-chromium-os-notebook and http://unix.stackexchange.com/questions/29283/install-chromium-os-without-usb-disk. Please contact me on Facebook (aniel arias) or at my email anielarias@yahoo.com. Thank you.


  • Email is stuck in the queue with 421 4.2.2 Connection dropped due to SocketError

    - by e0594cn
    We recently installed an Exchange 2010 server and we are having problems sending email to certain domains. Email is stuck in the queue with "421 4.2.2 Connection dropped due to SocketError". Any suggestions? Below is the output from a telnet session:

        EHLO etla.com.cn
        250-aa6061.com Hello [58.215.221.50]
        250-TURN
        250-SIZE 15360000
        250-ETRN
        250-PIPELINING
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-8bitmime
        250-BINARYMIME
        250-CHUNKING
        250-VRFY
        250-X-EXPS GSSAPI NTLM LOGIN
        250-X-EXPS=LOGIN
        250-AUTH GSSAPI NTLM LOGIN
        250-AUTH=LOGIN
        250-X-LINK2STATE
        250-XEXCH50
        250 OK
        MAIL FROM:peilin@etla.com.cn
        250 2.1.0 peilin@etla.com.cn....Sender OK
        RCPT TO:alisa@aa6061.com NOTIFY=success,failure
        550 5.7.1 Your email messages have been blocked by the recipient OR by Trend Micro Email Reputation Service. Contact the recipient or his/her administrator using alternate means to resolve the issue.


  • ODI 11g - Scripting a Reverse Engineer

    - by David Allan
    A common question is how to script the reverse engineer using the ODI SDK. This follows on from some of my posts on scripting in general and on accelerated model and topology setup. Check out this viewlet here to see how to define a reverse-engineering process using ODI's package. Using the ODI SDK, you can script this up using the OdiPackage and StepOdiCommand classes as follows:

        OdiPackage pkg = new OdiPackage(folder, "Pkg_Rev" + modName);

        StepOdiCommand step1 = new StepOdiCommand(pkg, "step1_cmd_reset");
        step1.setCommandExpression(new Expression(
            "OdiReverseResetTable \"-MODEL=" + mod.getModelId() + "\"",
            null, Expression.SqlGroupType.NONE));

        StepOdiCommand step2 = new StepOdiCommand(pkg, "step2_cmd_reset");
        step2.setCommandExpression(new Expression(
            "OdiReverseGetMetaData \"-MODEL=" + mod.getModelId() + "\"",
            null, Expression.SqlGroupType.NONE));

        StepOdiCommand step3 = new StepOdiCommand(pkg, "step3_cmd_reset");
        step3.setCommandExpression(new Expression(
            "OdiReverseSetMetaData \"-MODEL=" + mod.getModelId() + "\"",
            null, Expression.SqlGroupType.NONE));

        pkg.setFirstStep(step1);
        step1.setNextStepAfterSuccess(step2);
        step2.setNextStepAfterSuccess(step3);

    The biggest leap of faith for users is getting to know which SDK classes have to be used to build the objects in the design; using StepOdiCommand isn't necessarily obvious, but once you see it in action it is very simple to use. The above snippet uses an OdiModel variable named mod; it's a snippet I added to the accelerated model creation script in the post linked above.


  • Using IP Restrictions with URL Rewrite - Week 25

    - by OWScott
    URL Rewrite offers tremendous flexibility for customizing rules to your environment. One piece of functionality often desired for URL Rewrite is allowing a large list of approved or denied IP addresses and subnet ranges. IIS's built-in IP Restrictions feature is helpful for fully blocking an IP address, but it doesn't offer the flexibility that URL Rewrite does. An example where URL Rewrite helps is when you want to allow only authorized IPs to access staging.yoursite.com, where staging.yoursite.com is part of the same site as www.yoursite.com. This requires conditional logic on the user's IP. This lesson covers that situation while also introducing rewrite maps, server variables, and rule pairing to add more flexibility. This is week 25 of a 52-week series for the web pro. Past and future videos can be found here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week's video here.
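    A rough illustration of the idea (an editor's sketch, not the rule set from the video): a rewrite map holds the approved addresses, and a rule aborts requests to the staging host from any other IP:

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="AllowedStagingIPs">
              <add key="203.0.113.10" value="allow" />
              <add key="203.0.113.11" value="allow" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="BlockStagingByIP" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^staging\.yoursite\.com$" />
                <!-- look the client IP up in the map; block when no "allow" entry exists -->
                <add input="{AllowedStagingIPs:{REMOTE_ADDR}}" pattern="allow" negate="true" />
              </conditions>
              <action type="AbortRequest" />
            </rule>
          </rules>
        </rewrite>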


  • Securing data sent to an unencrypted WiFi AP

    - by David Parunakian
    The business plan of a project I'm involved in assumes selling certain WiFi-enabled devices to end users. All these devices initially have an unencrypted connection and a standard SSID. The problem is that although the user can connect to it and set both a new SSID and a WPA passphrase, these are sent to the AP in plain text and can thus be intercepted by anyone nearby with a sniffer. What's the best solution to this problem, and why?
    1. Initially set up an encrypted wireless network on the device and supply the user with a printed passphrase.
    2. Buy an SSL certificate for the AP's default IP address or local domain name (the APs aren't supposed to work as routers and have a captive portal and dnsmasq installed, so all of them can pretend to be myunit.example.com, as far as I understand).
    3. Something different.
    Thank you.

