Search Results

Search found 8893 results on 356 pages for 'stored'.


  • Do you know how to move the Team Foundation Server cache

    - by Martin Hinshelwood
    There are a number of reasons why you may want to change the folder where you store the TFS cache. It can take up "some" amount of room, so moving it to another drive can be beneficial. This is the source control cache that TFS uses to cache data from the database. Moving the cache is pretty easy and should allow you to organise your server space a little more efficiently. You may also get a (small) performance improvement by putting it on another drive.

    1. Create a new directory to store the cache, e.g. "d:\TfsCache\". (Figure: Create a new folder)

    2. Give the local TFS WPG group full control of the directory. (Figure: You need to use the App Tier Service WPG)

    3. In the application tier web.config (~\Application Tier\Web Services\web.config) add the following setting to the appSettings section. (Figure: The web.config for TFS is stored in the application folder)

        <appsettings>
          ...
          <add value="D:\" key="dataDirectory" />
          ...
        </appsettings>

       (Figure: Adding this to the web.config will trigger a restart of the app pool. Figure: Your web.config should look something like this)

    The app pool will automatically recycle and Team Web Access will start using the new location. If you then download a file (not via a proxy) a folder with a GUID should be created immediately in the folder from step 1. If the folder doesn't appear, then you probably don't have permissions set up properly.

    Read the article

  • Best of OTN - Week of November 4th

    - by CassandraClark-OTN
    It was another exciting week at OTN! Lots of GREAT content to share. If you had a favorite that you don't see listed, let us know in the comment section below.

    Java Community
    JavaOne Sessions Online - We've posted 60 of the JavaOne sessions online, and we'll be rolling out more sessions every few weeks. This content is free, courtesy of Oracle.
    NetBeans 7.4 Released - NetBeans 7.4 features HTML5 integration for Java EE and PHP development; support for Apache Cordova and JDK 8 preview features; enhancements to Maven, C/C++, and more.
    vJUG: Worldwide Virtual JUG Created - London Java Community leader and technical evangelist Simon Maple has created a Meetup called vJUG, with aim toward connecting Java Developers in the virtual world.
    Tori Wieldt, Java Community Manager
    Friday Funny: This is what REALLY happens when you give someone your business card ow.ly/q6aKU

    Architect Community
    Don't forget to register for the free Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence. December 3rd, 2013 - Two great tracks, Design & Develop and Build, Deploy & Manage. Why wait, register now!
    Multi-Factor Authentication in Oracle WebLogic - Shailesh K. Mishra - Really good technical article on using multi-factor authentication to protect web applications deployed on Oracle WebLogic.
    Coherence*Web: Sharing an httpSessions Among Applications in Different Oracle WebLogic Clusters - Jordi Villena - Understanding when and how to select session attributes that must be stored in the local storage of the Oracle WebLogic instances and which should be leveraged to an Oracle Coherence distributed cache.
    Bob Rhubart, Architect Community Manager
    Friday Funny - "Be yourself, everyone else is already taken." Oscar Wilde (October 16, 1854 - November 30, 1900), Irish writer and poet.

    Read the article

  • How to forward AIM to Gmail

    - by iamjames
    Still have an old AIM email address lying around and would like to forward it to Gmail? Here's how:

    1. Log in to your AIM and click on Settings on the far right.
    2. In the left menu click IMAP and POP.
    3. This shows you your IMAP and POP setup information for AIM. We're going to put this into your Gmail account so your Gmail account will check your AIM account and download all AIM emails.
    4. Log in to your Gmail, click Settings and click Accounts and Import.
    5. Click "Import mail and contacts". A new window will pop up asking what account you want to import. Enter your AIM email address and click Continue.
    6. The next page asks for your password. Enter your password and click Continue. Step 2 asks for your import options. I'd put a checkmark in "Leave a copy of retrieved message on server". That way all your mail is still stored on AIM if you ever need it.
    7. Click Start import and you're done. The next screen says it may take several hours, and up to 2 days, before you start seeing imported messages; you can check the status at Settings > Accounts and Import.

    Read the article

  • Object pools for efficient resource management

    - by GameDevEnthusiast
    How can I avoid using the default new() to create each object? My previous demo had very unpleasant framerate hiccups during dynamic memory allocations (usually, when arrays are resized), and creating lots of small objects which often contain one pointer to some DirectX resource seems like an awful lot of waste. I'm thinking about:

    - Creating a master look-up table to refer to objects by handles (for safety & ease of serialization), much like EntityList in the Source engine.
    - Creating a templated object pool, which will store items contiguously (more cache-friendly, fast iteration, etc.); the stored elements will be accessed (by external systems) via the global lookup table. The object pool will use the swap-with-last trick for fast removal (it will invoke the object's ~destructor first) and will update the corresponding indices in the global table accordingly (when growing/shrinking/moving elements). The elements will be copied via plain memcpy().

    Is it a good idea? Will it be safe to store objects of non-POD types (e.g. pointers, vtable) in such containers? Related post: Dynamic Memory Allocation and Memory Management
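    A minimal C# sketch of the handle-table plus swap-with-last idea described above (the type and member names are illustrative, not from any particular engine; a C++ version would add explicit destructor calls and the memcpy/non-POD concerns raised in the question):

        using System;
        using System.Collections.Generic;

        // Stable handle handed out to external systems; it survives pool compaction.
        public struct Handle { public int Id; }

        public class ObjectPool<T> where T : class
        {
            private readonly List<T> items = new List<T>();              // contiguous storage
            private readonly List<int> itemToHandle = new List<int>();   // slot -> handle id
            private readonly Dictionary<int, int> handleToIndex =
                new Dictionary<int, int>();                              // handle id -> slot
            private int nextId;

            public Handle Add(T item)
            {
                var handle = new Handle { Id = nextId++ };
                handleToIndex[handle.Id] = items.Count;
                items.Add(item);
                itemToHandle.Add(handle.Id);
                return handle;
            }

            public T Get(Handle handle) => items[handleToIndex[handle.Id]];

            // Swap-with-last removal: O(1), keeps storage contiguous,
            // and patches the lookup table entry of the element that moved.
            public void Remove(Handle handle)
            {
                int index = handleToIndex[handle.Id];
                int last = items.Count - 1;

                items[index] = items[last];
                itemToHandle[index] = itemToHandle[last];
                handleToIndex[itemToHandle[index]] = index;

                items.RemoveAt(last);
                itemToHandle.RemoveAt(last);
                handleToIndex.Remove(handle.Id);
            }

            public IReadOnlyList<T> Items => items;   // fast, cache-friendly iteration
        }

    In C# the garbage collector manages object lifetime, so the memcpy/vtable question does not arise; in C++ the same layout is only safe to memcpy for trivially copyable types, and non-POD elements need to be moved or copy-constructed instead.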

    Read the article

  • Maximum Length Of IP Address: 15 (IPv4) & 39(IPv6)

    - by Gopinath
    Problem
    You are designing a database table for a web application that needs to store the IP address of users who visit the site. The IP address is to be stored as character data in the table. To define the size of the character column you need to know the maximum length of an IP address. So, what is the maximum length of an IP address?

    Solution
    The IPv4 version of an IP address is in the following format: 255.255.255.255. To store an IPv4 address we require 15 characters. The IPv6 version of an IP address is grouped into sets of 4 hex digits separated by colons, like this: 2001:0db8:85a3:0000:0000:8a2e:0370:7334. To store an IPv6 address you require a column 39 characters long.

    Conclusion
    As IPv4 and IPv6 are the commonly used protocols, you are better off defining a column 39 characters long so that addresses in both formats are saved into the table without any issues. This article, titled Maximum Length Of IP Address: 15 (IPv4) & 39 (IPv6), was originally published at Tech Dreams.
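    As a quick worked example, a small C# snippet (illustrative only, not from the original article) confirming the two lengths quoted above:

        using System;
        using System.Net;

        class MaxIpLength
        {
            static void Main()
            {
                string maxIPv4 = "255.255.255.255";
                string maxIPv6 = "2001:0db8:85a3:0000:0000:8a2e:0370:7334";

                Console.WriteLine(maxIPv4.Length); // 15
                Console.WriteLine(maxIPv6.Length); // 39

                // IPAddress.TryParse accepts both notations, so one text column
                // sized for 39 characters can hold either form.
                IPAddress parsed;
                Console.WriteLine(IPAddress.TryParse(maxIPv6, out parsed)); // True
            }
        }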

    Read the article

  • Should I cache the data or hit the database?

    - by JD01
    I have not worked with any caching mechanisms and was wondering what my options are in the .NET world for the following scenario. We basically have a REST service where the user passes the ID of a category (think folder), and this category may have lots of sub-categories, and each of the sub-categories could have 1000s of media containers (think file reference objects) which contain information about a file that may be on a NAS or SAN server (the files are videos in this case). The relationship between these categories is stored in a database together with some permission rules and metadata about the sub-categories. So from a UI perspective we have a lazy-loaded tree control which is driven by the user clicking on each sub-folder (think of Windows Explorer). Once they come to a URL of the video file, they can then watch the video. The number of users could grow into the 1000s, and the sub-categories and videos could be in the 10000s as the system grows. The question is: should we carry on the way it is currently working, where each request hits the database, or should we think about caching the data? We are using IIS 6/7 and ASP.NET.
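    One common starting point in the .NET world is an in-process cache-aside wrapper around the category lookup. Below is a minimal sketch using System.Runtime.Caching (available from .NET 4; on older ASP.NET versions HttpRuntime.Cache offers the same pattern). The repository and Category types are placeholders, not the poster's code:

        using System;
        using System.Collections.Generic;
        using System.Runtime.Caching;

        public class Category { public int Id; public string Name; }

        public interface ICategoryRepository
        {
            IList<Category> LoadChildren(int categoryId);   // the existing database call
        }

        public class CategoryCache
        {
            private readonly MemoryCache cache = MemoryCache.Default;

            public IList<Category> GetChildren(int categoryId, ICategoryRepository repository)
            {
                string key = "category-children:" + categoryId;

                var cached = cache.Get(key) as IList<Category>;
                if (cached != null)
                    return cached;                          // cache hit: no database round trip

                var children = repository.LoadChildren(categoryId);

                // Absolute expiration keeps permission/metadata changes from going stale forever.
                cache.Set(key, children, DateTimeOffset.UtcNow.AddMinutes(5));
                return children;
            }
        }

    Whether this is worth it depends on how often the category tree and permissions change; a short expiration keeps the database as the source of truth while absorbing the repeated tree-expansion requests.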

    Read the article

  • Delaying a Foreach loop half a second

    - by Sigh-AniDe
    I have created a game that has a ghost that mimics the movement of the player after 10 seconds. The movements are stored in a list and I use a foreach loop to go through the commands. The ghost mimics the movements, but it does them way too fast; within a split second of spawning it catches up to my current movement. How do I slow down the foreach so that it only executes a command every half a second? I don't know how else to do it. Please help. This is what I tried (the foreach runs inside the Update method):

        DateTime dt = DateTime.Now;
        foreach ( string commandDirection in ghostMovements )
        {
            int mapX = ( int )( ghostPostition.X / scalingFactor );
            int mapY = ( int )( ghostPostition.Y / scalingFactor );

            // If the dt is the same as current time
            if ( dt == DateTime.Now )
            {
                if ( commandDirection == "left" )
                {
                    switch ( ghostDirection )
                    {
                        case ghostFacingUp:
                            angle = 1.6f;
                            ghostDirection = ghostFacingRight;
                            Program.form.direction = "";
                            dt.AddMilliseconds( 500 ); // add half a second to dt
                            break;
                        case ghostFacingRight:
                            angle = 3.15f;
                            ghostDirection = ghostFacingDown;
                            Program.form.direction = "";
                            dt.AddMilliseconds( 500 );
                            break;
                        case ghostFacingDown:
                            angle = -1.6f;
                            ghostDirection = ghostFacingLeft;
                            Program.form.direction = "";
                            dt.AddMilliseconds( 500 );
                            break;
                        case ghostFacingLeft:
                            angle = 0.0f;
                            ghostDirection = ghostFacingUp;
                            Program.form.direction = "";
                            dt.AddMilliseconds( 500 );
                            break;
                    }
                }
            }
        }
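    One way to get the half-second pacing (a sketch only, not the poster's code) is to stop looping over the whole list every frame: keep an index into ghostMovements, accumulate elapsed time in Update, and consume one command each time half a second has passed. This assumes an XNA-style Update(GameTime); ApplyCommand is a hypothetical helper holding the switch logic from the question:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public class GhostPlayback
        {
            private readonly List<string> ghostMovements;
            private int nextCommand;
            private double timeSinceLastCommand;
            private const double CommandInterval = 0.5;   // seconds between mimicked moves

            public GhostPlayback(List<string> recordedMovements)
            {
                ghostMovements = recordedMovements;
            }

            public void Update(GameTime gameTime)
            {
                timeSinceLastCommand += gameTime.ElapsedGameTime.TotalSeconds;

                // Consume at most one recorded command per half second.
                if (timeSinceLastCommand >= CommandInterval && nextCommand < ghostMovements.Count)
                {
                    ApplyCommand(ghostMovements[nextCommand]);
                    nextCommand++;
                    timeSinceLastCommand -= CommandInterval;
                }
            }

            private void ApplyCommand(string commandDirection)
            {
                // Hypothetical: the switch over ghostDirection from the question goes here.
            }
        }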

    Read the article

  • Added resolution not working after upgrading to 12.04

    - by David
    After upgrading, my screen resolution (added by xrandr commands at startup) did not work like before. Messages appear showing errors that I never had in 11.10: "No se pudo aplicar la configuración almacenada para los monitores" / "Could not apply the stored configuration for the monitors."

    This script didn't work either:

        xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
        xrandr --addmode VGA1 1280x1024_60.00
        xrandr --output VGA1 --mode 1280x1024_60.00

    I also tried deleting monitors.xml, but nothing; that only erases the message window. It's been said that it's a normal, well-known problem for PCs with Intel integrated video cards: the new version of gnome-settings-daemon stores its configuration information in dconf rather than gconf. I tried something, but the problem persists. This is what I did:

    Install the dconf-tools package, and then run dconf-editor. In the tree on the left, navigate org - gnome - settings-daemon - plugins - xrandr. Uncheck the active checkbox. Restart your XServer (Ctrl+Alt+Backspace). (It didn't work out for me, but it may be helpful to someone.)

    Read the article

  • Entity Framework and distributed Systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint for the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with the code first approach, and an ASP.NET Web API client in C# that is being refactored right now. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage my database access on the Web API system. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble while creating new migrations. Two solutions I've thought about:

    Shared data access dll: The first idea was to separate the data access layer into a separate project and reference it from each of the systems. The context would be the same as long as the dll is up to date in each system. This way both solutions would be able to make a migration. The main problem is that it is much more complicated to update a Web API system than it is with the client's ClickOnce update solution, and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries.

    Database first on Web API: The second idea was just to use the database first approach on the Web API side. But it seems that all annotations will be lost with each model update.

    Other solutions with stored procedures have been discarded because of missing OData support and maintainability. Does anyone run into the same conflicts or have any advice on how such a problem can be solved?

    Read the article

  • Does the google crawler really guess URL patterns and index pages that were never linked against?

    - by Dominik
    I'm experiencing problems with indexed pages which were (probably) never linked to. Here's the setup:

    1. Data-Server: Application with a RESTful interface which provides the data
    2. Website A: Provides the data of (1) at http://website-a.example.com/?id=RESOURCE_ID
    3. Website B: Provides the data of (1) at http://website-b.example.com/?id=OTHER_RESOURCE_ID

    So the whole, non-private data is stored on (1), and the websites (2) and (3) can fetch and display this data, which is a representation of the data with additional cross-linking between them. In fact, the URL /?id=1 of website-a points to the same resource as /?id=1 of website-b. However, the resource id:1 is useless at website-b. Unfortunately, the google index for website-b now contains several links to resources belonging to website-a and vice versa. I "heard" that the google crawler tries to determine the URL pattern (which makes sense for deciding which pages should go into the index and which not) and furthermore guesses other URLs by trying different values (like "I know that id 1 exists, let's try 2, 3, 4, ..."). Is there any evidence that the google crawler really behaves that way (which I doubt)? My guess is that the google crawler submitted an HTML form and somehow got links to those unwanted resources. I found some similar posted questions about that, including "Google webmaster central: indexing and posting false pages" [link removed]; however, none of those pages gives any evidence.

    Read the article

  • Standard Network Tiers in a Distributed N-Tier System

    Distributed N-Tier client/server architecture allows for segments of an application to be broken up and distributed across multiple locations on a network. Listed below are the standard tiers in a Distributed N-Tier System.

    End-User Client Tier
    The End-User Client is responsible for sending requests to and receiving responses from web servers and other application servers, and for translating the responses so that the End-User can interpret the data effectively. The primary roles for this tier are to communicate with servers and translate server responses back for the end-user to interpret.
    Business-Specific Functions: Validate Data; Display Data; Send Data to Web Server

    Web Server Tier
    The Web Server tier processes new requests for information coming in from the HTTP and HTTPS ports. It primarily handles the generation of user interfaces and calls the application server when needed to access data and business logic.
    Business-Specific Functions: Send Data to Application Server; Format Data for Display; Validate Data

    Application Server Tier
    The Application Server stores and executes predefined business logic that is applied to various pieces of data as the business determines. The processed data is then returned to the web server. Additionally, this server directly calls the database to obtain and store any data used by the system.
    Business-Specific Functions: Validate Data; Process Data; Send Data to Database Server

    Database Server Tier
    The Database Server is responsible for storing and returning all data needed by the calling applications. The primary role for this server is storage. Data is stored as needed and can be recalled at any point later in time.
    Business-Specific Functions: Insert Data; Delete Data; Return Data to Application Server

    Read the article

  • Defining formula through user interface in user form

    - by BriskLabs Pakistan
    I am a student developing a simple assignment - a Windows Forms application in Visual Studio 2010. The application is supposed to construct formulas as per user requirements.

    The process: it has to pick data from columns of a Microsoft Access database, and the user should be able to pick the data by column name, like we do in a drop-down menu, and create reusable formulas from it (configure it once and be able to change it again). The following are column titles from the database that can be picked, for example:

    Col 1: Marks in Maths
    Col 2: Total Marks in Maths
    Col 3: Marks in Science
    Col 4: Total Marks in Science

    Finally we should be able to construct any formula in the UI, like (Col 1 + Col 3) / (Col 2 + Col 4) = Formula 1. Once this formula is set, it is saved and a name is assigned to it by the user. He/she can then use the formula and the results shall appear in a window below, i.e. he would be able to calculate his desired figures (the formula) by only manipulating the underlying data on the UI layer: choose the data for a period, apply the formula and get the answer.

    Problem: it looks like I have to create an app where rules are set through the UI. This means no stored procedures are required in SQL. Please suggest the right approach.
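    For simple arithmetic formulas over named columns, one low-effort .NET option is a computed DataColumn, whose Expression property already parses text like (Col1 + Col3) / (Col2 + Col4), so no stored procedures are needed. A rough sketch (column and formula names are illustrative):

        using System;
        using System.Data;

        class FormulaDemo
        {
            static void Main()
            {
                // Columns mirror the Access table described above (names are illustrative).
                var table = new DataTable("Results");
                table.Columns.Add("MarksInMaths", typeof(double));
                table.Columns.Add("TotalMarksMaths", typeof(double));
                table.Columns.Add("MarksInScience", typeof(double));
                table.Columns.Add("TotalMarksScience", typeof(double));
                table.Rows.Add(40, 50, 35, 50);

                // A user-defined formula stored as text, e.g. built from drop-down picks:
                string formula1 = "(MarksInMaths + MarksInScience) / (TotalMarksMaths + TotalMarksScience)";

                // DataColumn.Expression parses and evaluates the formula for every row.
                table.Columns.Add("Formula1", typeof(double)).Expression = formula1;

                Console.WriteLine(table.Rows[0]["Formula1"]); // 0.75
            }
        }

    The formula text itself can be persisted (for example in an Access table of named formulas), so the user configures it once and reuses it against whatever data is loaded for a period.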

    Read the article

  • Windows Phone 7 : Dragging and flicking UI controls

    - by TechTwaddle
    Who would want to flick and drag UI controls!? There might not be many use cases but I think some concepts here are worthy of a post. So we will create a simple silverlight application for windows phone 7, containing a canvas element on which we’ll place a button control and an image and then, as the title says, drag and flick the controls. Here’s Mainpage.xaml, <Grid x:Name="LayoutRoot" Background="Transparent">   <Grid.RowDefinitions>     <RowDefinition Height="Auto"/>     <RowDefinition Height="*"/>   </Grid.RowDefinitions>     <!--TitlePanel contains the name of the application and page title-->   <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">     <TextBlock x:Name="ApplicationTitle" Text="KINETICS" Style="{StaticResource PhoneTextNormalStyle}"/>     <TextBlock x:Name="PageTitle" Text="drag and flick" Margin="9,-7,0,0" Style="{StaticResource PhoneTextTitle1Style}"/>   </StackPanel>     <!--ContentPanel - place additional content here-->   <Grid x:Name="ContentPanel" Grid.Row="1" >     <Canvas x:Name="MainCanvas" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">       <Canvas.Background>         <LinearGradientBrush StartPoint="0 0" EndPoint="0 1">           <GradientStop Offset="0" Color="Black"/>           <GradientStop Offset="1.5" Color="BlanchedAlmond"/>         </LinearGradientBrush>       </Canvas.Background>     </Canvas>   </Grid> </Grid> the second row in the main grid contains a canvas element, MainCanvas, with its horizontal and vertical alignment set to stretch so that it occupies the entire grid. The canvas background is a linear gradient brush starting with Black and ending with BlanchedAlmond. We’ll add the button and image control to this canvas at run time. Moving to Mainpage.xaml.cs the Mainpage class contains the following members, public partial class MainPage : PhoneApplicationPage {     Button FlickButton;     Image FlickImage;       FrameworkElement ElemToMove = null;     double ElemVelX, ElemVelY;       const double SPEED_FACTOR = 60;       DispatcherTimer timer; FlickButton and FlickImage are the controls that we’ll add to the canvas. ElemToMove, ElemVelX and ElemVelY will be used by the timer callback to move the ui control. SPEED_FACTOR is used to scale the velocities of ui controls. Here’s the Mainpage constructor, // Constructor public MainPage() {     InitializeComponent();       AddButtonToCanvas();       AddImageToCanvas();       timer = new DispatcherTimer();     timer.Interval = TimeSpan.FromMilliseconds(35);     timer.Tick += new EventHandler(OnTimerTick); } We’ll look at those AddButton and AddImage functions in a moment. The constructor initializes a timer which fires every 35 milliseconds, this timer will be started after the flick gesture completes with some inertia. 
Back to AddButton and AddImage functions, void AddButtonToCanvas() {     LinearGradientBrush brush;     GradientStop stop1, stop2;       Random rand = new Random(DateTime.Now.Millisecond);       FlickButton = new Button();     FlickButton.Content = "";     FlickButton.Width = 100;     FlickButton.Height = 100;       brush = new LinearGradientBrush();     brush.StartPoint = new Point(0, 0);     brush.EndPoint = new Point(0, 1);       stop1 = new GradientStop();     stop1.Offset = 0;     stop1.Color = Colors.White;       stop2 = new GradientStop();     stop2.Offset = 1;     stop2.Color = (Application.Current.Resources["PhoneAccentBrush"] as SolidColorBrush).Color;       brush.GradientStops.Add(stop1);     brush.GradientStops.Add(stop2);       FlickButton.Background = brush;       Canvas.SetTop(FlickButton, rand.Next(0, 400));     Canvas.SetLeft(FlickButton, rand.Next(0, 200));       MainCanvas.Children.Add(FlickButton);       //subscribe to events     FlickButton.ManipulationDelta += new EventHandler<ManipulationDeltaEventArgs>(OnManipulationDelta);     FlickButton.ManipulationCompleted += new EventHandler<ManipulationCompletedEventArgs>(OnManipulationCompleted); } this function is basically glorifying a simple task. After creating the button and setting its height and width, its background is set to a linear gradient brush. The direction of the gradient is from top towards bottom and notice that the second stop color is the PhoneAccentColor, which changes along with the theme of the device. The line,     stop2.Color = (Application.Current.Resources["PhoneAccentBrush"] as SolidColorBrush).Color; does the magic of extracting the PhoneAccentBrush from application’s resources, getting its color and assigning it to the gradient stop. AddImage function is straight forward in comparison, void AddImageToCanvas() {     Random rand = new Random(DateTime.Now.Millisecond);       FlickImage = new Image();     FlickImage.Source = new BitmapImage(new Uri("/images/Marble.png", UriKind.Relative));       Canvas.SetTop(FlickImage, rand.Next(0, 400));     Canvas.SetLeft(FlickImage, rand.Next(0, 200));       MainCanvas.Children.Add(FlickImage);       //subscribe to events     FlickImage.ManipulationDelta += new EventHandler<ManipulationDeltaEventArgs>(OnManipulationDelta);     FlickImage.ManipulationCompleted += new EventHandler<ManipulationCompletedEventArgs>(OnManipulationCompleted); } The ManipulationDelta and ManipulationCompleted handlers are same for both the button and the image. OnManipulationDelta() should look familiar, a similar implementation was used in the previous post, void OnManipulationDelta(object sender, ManipulationDeltaEventArgs args) {     FrameworkElement Elem = sender as FrameworkElement;       double Left = Canvas.GetLeft(Elem);     double Top = Canvas.GetTop(Elem);       Left += args.DeltaManipulation.Translation.X;     Top += args.DeltaManipulation.Translation.Y;       //check for bounds     if (Left < 0)     {         Left = 0;     }     else if (Left > (MainCanvas.ActualWidth - Elem.ActualWidth))     {         Left = MainCanvas.ActualWidth - Elem.ActualWidth;     }       if (Top < 0)     {         Top = 0;     }     else if (Top > (MainCanvas.ActualHeight - Elem.ActualHeight))     {         Top = MainCanvas.ActualHeight - Elem.ActualHeight;     }       Canvas.SetLeft(Elem, Left);     Canvas.SetTop(Elem, Top); } all it does is calculate the control’s position, check for bounds and then set the top and left of the control. 
OnManipulationCompleted() is more interesting because here we need to check if the gesture completed with any inertia and if it did, start the timer and continue to move the ui control until it comes to a halt slowly, void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs args) {     FrameworkElement Elem = sender as FrameworkElement;       if (args.IsInertial)     {         ElemToMove = Elem;           Debug.WriteLine("Linear VelX:{0:0.00}  VelY:{1:0.00}", args.FinalVelocities.LinearVelocity.X,             args.FinalVelocities.LinearVelocity.Y);           ElemVelX = args.FinalVelocities.LinearVelocity.X / SPEED_FACTOR;         ElemVelY = args.FinalVelocities.LinearVelocity.Y / SPEED_FACTOR;           timer.Start();     } } ManipulationCompletedEventArgs contains a member, IsInertial, which is set to true if the manipulation was completed with some inertia. args.FinalVelocities.LinearVelocity.X and .Y will contain the velocities along the X and Y axis. We need to scale down these values so they can be used to increment the ui control’s position sensibly. A reference to the ui control is stored in ElemToMove and the velocities are stored as well, these will be used in the timer callback to access the ui control. And finally, we start the timer. The timer callback function is as follows, void OnTimerTick(object sender, EventArgs e) {     if (null != ElemToMove)     {         double Left, Top;         Left = Canvas.GetLeft(ElemToMove);         Top = Canvas.GetTop(ElemToMove);           Left += ElemVelX;         Top += ElemVelY;           //check for bounds         if (Left < 0)         {             Left = 0;             ElemVelX *= -1;         }         else if (Left > (MainCanvas.ActualWidth - ElemToMove.ActualWidth))         {             Left = MainCanvas.ActualWidth - ElemToMove.ActualWidth;             ElemVelX *= -1;         }           if (Top < 0)         {             Top = 0;             ElemVelY *= -1;         }         else if (Top > (MainCanvas.ActualHeight - ElemToMove.ActualHeight))         {             Top = MainCanvas.ActualHeight - ElemToMove.ActualHeight;             ElemVelY *= -1;         }           Canvas.SetLeft(ElemToMove, Left);         Canvas.SetTop(ElemToMove, Top);           //reduce x,y velocities gradually         ElemVelX *= 0.9;         ElemVelY *= 0.9;           //when velocities become too low, break         if (Math.Abs(ElemVelX) < 1.0 && Math.Abs(ElemVelY) < 1.0)         {             timer.Stop();             ElemToMove = null;         }     } } if ElemToMove is not null, we get the top and left values of the control and increment the values with their X and Y velocities. Check for bounds, and if the control goes out of bounds we reverse its velocity. Towards the end, the velocities are reduced by 10% every time the timer callback is called, and if the velocities reach too low values the timer is stopped and ElemToMove is made null. Here’s a short video of the program, the video is a little dodgy because my display driver refuses to run the animations smoothly. The flicks aren’t always recognised but the program should run well on an actual device (or a pc with better configuration), You can download the source code from here: ButtonDragAndFlick.zip

    Read the article

  • How do I architect 2 plugins that share a common component?

    - by James
    I have an object that takes in data and spits out a transformed output, called IBaseItem. I also have two parsers, IParserA and IParserB. These parsers transform external data (in format dataA and dataB respectively) to a format usable by my IBaseItem (baseData). I want to create 2 systems, one that works with dataA and one that works with dataB. They will allow the user to enter data, match it to the right plugins/implementations, and transform the data to outData. I want to write these traffic cops myself, but have other people provide the parsers and base item logic, and as such am implementing these items as plugins (hence the use of interfaces). Other programmers can choose to implement one or both parsers.

    Q: How should I structure the way base items and parsers are associated, stored, and loaded into each of my programs?

    Class Relations: (diagram not shown)

    What I've tried: Initially I thought there should be a different dll for each of my 2 traffic cops, each containing a parser and a base item. However, the duplication of base item logic doesn't seem right (especially if the base item logic changes). I then thought the base items could all have their own dll, and then somehow associate parsers and base items (GUIDs?), but I don't know if the overhead of implementing that id/association adds too much complexity.
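    One sketch of how the association could work (interface members and names here are invented for illustration): keep IBaseItem and its logic in a single shared assembly, and let each traffic cop load plugins that register a parser plus a base-item factory against the data format they understand:

        using System;
        using System.Collections.Generic;

        // Shared assembly: one copy of the base item logic for both hosts.
        public interface IBaseItem { string Transform(string baseData); }

        // IParserA / IParserB are collapsed into a common contract for brevity.
        public interface IParser { string ToBaseData(string externalData); }

        public class PluginRegistry
        {
            private readonly Dictionary<string, (IParser Parser, Func<IBaseItem> ItemFactory)> plugins =
                new Dictionary<string, (IParser, Func<IBaseItem>)>();

            // Each plugin assembly registers the formats it understands, e.g. "dataA" or "dataB".
            public void Register(string dataFormat, IParser parser, Func<IBaseItem> itemFactory) =>
                plugins[dataFormat] = (parser, itemFactory);

            // The "traffic cop" only needs the format name to route incoming data.
            public string Process(string dataFormat, string externalData)
            {
                var entry = plugins[dataFormat];
                string baseData = entry.Parser.ToBaseData(externalData);
                return entry.ItemFactory().Transform(baseData);
            }
        }

    Each traffic cop then differs only in which plugin assemblies it loads and which format name it routes to, while the base item logic lives in exactly one place.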

    Read the article

  • Feature Updates to the Windows Azure Portal

    - by Clint Edmonson
    Lots of activity over at the Windows Azure portal this weekend, including some exciting new features and major improvements to existing features. Here are the highlights:

    Support for Managing Co-administrators
    - Set up account co-administrators to allow others to share service management duties for each Azure subscription

    Import/Export support for SQL Databases
    - Export existing SQL Azure databases to blob storage using SQL Server 2012's BACPAC format
    - Create a new SQL Azure database from an existing BACPAC stored in blob storage

    Storage Container Management and Access Control
    - Create blob storage containers directly within the portal
    - Edit their public/private access settings
    - Drill into storage containers and see the blobs contained within them

    Improved Cloud Service Status Notifications
    - Detailed health status information about cloud services and roles as they transition between states

    Virtual Machine Experience Enhancements
    - Option to automatically delete corresponding VHD files from blob storage when deleting VM disks

    Service Bus Management and Monitoring
    - Ability to create and manage service bus Namespaces, Queues, Topics, Relays and Subscriptions
    - Rich monitoring of Topics, Queues, and Subscriptions with detailed and customizable dashboard metrics
    - Entity status (Topic, Queue, or Subscription) can be changed interactively via dashboard
    - Direct links to the Access Control Services (ACS) namespaces when working with service bus access keys

    Media Services Monitoring Support
    - Monitor encoding jobs that are queued for processing as well as active, failed and queued tasks for encoding jobs

    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted

    Reference ID: P7VVJCM38V8R

    Read the article

  • High resolution graphical representation of the Earth's surface

    - by Simon
    I've got a library, which I inherited, which presents a zoomable representation of the Earth. It's a Mercator projection and is constructed from triangles, the properties of which are stored in binary files. The surface is built up, for any given view port, by drawing these triangles in an overlapping fashion to produce the image. The definition of each triangle is the lat/long of the vertices. It looks OK at low values of zoom but looks progressively more ragged as the user zooms in. The view ports are primarily referenced through a rectangle of lat/long co-ordinates. I'd like to replace it with a better quality approach. The problem is, I don't know where to begin researching the options, as I am not familiar with either the projections needed or the graphics techniques used to render them. For example, I imagine that I could acquire high resolution images, say Mercator projections although I'm open to anything, break them into tiles and somehow wrap them onto a graphical representation of a sphere. I'm not asking for "how do I"; it's more where should I begin to understand what might be involved and the techniques I will need to learn. I am most grateful for any "Earth rendering 101" pointers folks might have.

    Read the article

  • Field symbol and Data reference in SAP-ABAP

    - by PDP-21
    If we compare field symbols and data references with pointers in C, I concluded the following. In C, say we declare a variable "var" of type "Integer" with default value "5". The variable "var" will be stored somewhere in memory, and say the memory address which holds this variable is "1000". Now we define a pointer "ptr" and this pointer is assigned to our variable. So "ptr" will hold "1000" and "*ptr" will be 5.

    Let's compare the above situation in SAP ABAP. Here we declare a field symbol "FS" and assign it to the variable "var". Now my question is: what does "FS" hold? I have searched this rigorously on the internet, but found that many ABAP consultants have the opinion that FS holds the address of the variable, i.e. 1000. But that is wrong. While debugging I found out that FS holds only 5. So FS (in ABAP) is equivalent to *ptr (in C). Please correct me if my understanding is wrong.

    Now let's declare a data reference "dref" and another field symbol "fsym", and after creating the data reference we assign it to the field symbol. Now we can do operations on this field symbol. So the difference between a data reference and a field symbol is:

    - in the case of a field symbol, first we declare a variable and assign it to the field symbol;
    - in the case of a data reference, first we create a data reference and then assign that to a field symbol.

    Then what is the use of a data reference? The same functionality we can achieve through a field symbol also.

    Read the article

  • How to integrate a PHP CMS with paypal so that only users who completed a payment can register and authenticate?

    - by ibiza
    I am currently using a PHP CMS - cmsmadesimple - in order to create a website where services will be sold. I intend to use PayPal 'Buy Now' buttons in order to offer a few packages that will be renewable every 1 month or every 3 months and that grant access to the secure content of the website for a given period of time. Everything is going well so far, but I am somewhat at a loss for the user registration process, as I have a few constraints I would like to apply and it would be nice to automate the process if possible.

    Here are the constraints:
    - Users should be able to register on my website and choose a password themselves
    - Only users that paid should be able to register
    - Access permissions should be disabled automatically after the service period if the package is not renewed

    And here is the process which I am thinking of:
    - User clicks 'buy' on my website
    - User is redirected to Paypal and completes the payment
    - The PayPal email used to pay should be returned to my server and somehow stored
    - If it is a new email, the user needs to register on my website (else, if it is a returning customer, the deactivation flag for stopped payment should be removed to give back access)
    - If a user does not renew his subscription, a deactivation flag should automatically be set on the email used, in order to lock access until the next payment

    Ideally, no human intervention is needed. What is the best way to implement all this? I am a bit at a loss. I found this article that explained a few things and even has a nice code snippet, except that I'm not sure where to plug it in. Thanks all

    Read the article

  • Creating, using and managing XML component dictionaries quick tutorials

    - by drrwebber
    XML Component Dictionary capabilities are provided in conjunction with the CAM Editor toolset. These dictionaries accelerate the development of consistent XML information exchanges using standard sets of dictionary components. The quick tutorials are aimed at showing the 'how to' of the basic capabilities to jump start the use of XML dictionaries with the CAM Editor. The collection of dictionary tutorial videos runs for a total of approximately 20 minutes; each video can also be reviewed individually.

    Learn how to use the dictionary functions to create dictionaries by harvesting data model components from existing XSD schema, SQL database table schema, or simple Excel / Open Office spreadsheets with tables of components listed. Also included are tips and functions relating to the use of NIEM exchange development, IEPD and EIEM techniques. These videos should be viewed in conjunction with reviewing the overall concepts and techniques described in the companion video on the CAM Editor and Dictionaries overview. The approach is aligned with OASIS and Core Components Technical Specification (CCTS) standards specifications for XML components and dictionaries.

    Dictionary collections can be stored locally on the file system or local network, or collaboratively on a web or cloud deployment, or can be shared and managed securely using the Oracle Enterprise Repository (OER) tool. Also included are techniques relating to the use of the NIEM approach for developing XML exchange schema and IEPD packages. This includes generating reuse scores, wantlist, and cross reference spreadsheets. Included in the latest release of the CAM Editor is the ability to use the analyse dictionary tool to determine duplicate components, conflicting component definitions, missing component descriptions and so on. This ensures high quality dictionary component specifications. Using the CAM Editor you can also create MindMap models and UML physical models of your dictionary component sets. For a complete guide to using the CAM Editor see the main YouTube video tutorials website and the CAM Editor website.

    Read the article

  • Help parsing long (3.5mil lines) text file, line by line and storing data, need a strategy

    - by Jarrod
    This is a question about solving a particular problem I am struggling with. I am parsing a long list of text data, line by line, for a business app in PHP (a cron script on the CLI). The file follows the format:

        HD: Some text here {text here too}
        DC: A description here
        DC: the description continues here
        DC: and it ends here.
        DT: 2012-08-01
        HD: Next header here {supplemental text}
        ...

    This repeats over and over for a few hundred megs. I have to read each line and parse out the HD: line and grab the text on this line. I then compare this text against data stored in a database. When a match is found, I want to record the DC: lines that follow the matched HD:.

    Pseudo code:

        while ( the_file_pointer_isnt_end_of_file ) {
            line = getCurrentLineFromFile
            title = parseTitleFrom(line)
            matched = searchForMatchInDB(line)
            if ( matched ) {
                recordTheDCLines // <- Best way to do this?
            }
        }

    My problem is that because I am reading line by line, what is the best way to trigger the script to start saving DC lines, and then, when they are finished, save them to the database? I have a vague idea, but have yet to properly implement it. I would love to hear the community's ideas/suggestions! Thank you.
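    The usual strategy is a tiny state machine: remember whether the last HD: line matched, buffer DC: lines only while it did, and flush the buffer when the next HD: arrives (and once more at end of file). Sketched here in C# for brevity; the same shape maps directly onto a PHP fgets() loop, and MatchesDatabase/Flush stand in for the real database calls:

        using System;
        using System.Collections.Generic;
        using System.IO;

        class HdDcParser
        {
            static void Main()
            {
                var dcBuffer = new List<string>();
                bool collecting = false;

                // File.ReadLines streams the file line by line, so the multi-million-line
                // file is never loaded into memory at once.
                foreach (string line in File.ReadLines("data.txt"))
                {
                    if (line.StartsWith("HD: "))
                    {
                        Flush(dcBuffer, collecting);            // save the previous record, if any
                        string title = line.Substring(4);
                        collecting = MatchesDatabase(title);    // start collecting only on a match
                    }
                    else if (collecting && line.StartsWith("DC: "))
                    {
                        dcBuffer.Add(line.Substring(4));
                    }
                }
                Flush(dcBuffer, collecting);                    // don't forget the final record
            }

            // Placeholders for the real database lookup and insert.
            static bool MatchesDatabase(string title) => title.Contains("here");

            static void Flush(List<string> dcLines, bool matched)
            {
                if (matched && dcLines.Count > 0)
                    Console.WriteLine("Saving: " + string.Join(" ", dcLines));
                dcLines.Clear();
            }
        }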

    Read the article

  • Best Terminology for a particular php-based site architecture? [closed]

    - by hen3ry
    For a site...

    o whose overall look-and-feel is generated by one php page ("index.php").
    o in which "index.php" provides for all pages served the following required components: the DOCTYPE, opening html tag, the head section, the opening body tag, the end-body tag, and the end-html tag.
    o which uses computed hierarchical navigation menus within "index.php" to offer visitors access to the site content.
    o in which all content is stored in individual files that contain "headerless html" (the DOCTYPE, etc. being provided by "index.php" as described above).

    Q1: what term best describes this architecture? I'm seeking a concise descriptor that is useful in conversation and definitive as a search term, in whole-web searches, and when searching here on Pro Webmasters.

    Q2: what term best describes the individual content files? Same general goals for the descriptor as above. As you see above, I couldn't avoid using the term "headerless html", my best choice. But this term does not seem to be in general use. I've found some people use this term to describe files such as my content files, but others use it quite differently.

    Read the article

  • Elliptical orbit modeling

    - by Nathon
    I'm playing with orbits in a simple 2-d game where a ship flies around in space and is attracted to massive things. The ship's velocity is stored in a vector and acceleration is applied to it every frame as appropriate given Newton's law of universal gravitation. The point masses don't move (there's only 1 right now) so I would expect an elliptical orbit. Instead, I see this: (plot not shown). I've tried with nearly circular orbits, and I've tried making the masses vastly different (a factor of a million), but I always get this rotated orbit. Here's some (D) code, for context:

        void accelerate(Vector delta)
        {
            velocity = velocity + delta; // Velocity is a member of the ship class.
        }

        // This function is called every frame with the fixed mass. It's a
        // method of the ship's.
        void fall(Well well)
        {
            // f=(m1 * m2)/(r**2)
            // a=f/m
            // Ship mass is 1, so a = f.
            float mass = 1;
            Vector delta = well.position - loc;
            float rSquared = delta.magSquared;
            float force = well.mass/rSquared;
            accelerate(delta * force * mass);
        }

    Read the article

  • Algorithmic problem - quickly finding all #'s where value %x is some given value

    - by Steve B.
    Problem I'm trying to solve (apologies in advance for the length): given a large number of stored records, each with a unique (String) field S, I'd like to be able to find, through an indexed query, all records where Hash(S) % N == K for any arbitrary N, K (e.g. given a million strings, find all strings where HashCode(s) % 17 == 5). Is there some way of memoizing this so that we can quickly answer any question of this form without doing the % on every value?

    The motivation for this is a system of N distributed nodes, where each record has to be assigned to at least one node. The nodes are numbered 0 - (N-1), and each node has to load up all of the records that match its number. If we have 3 nodes:

    Node 0 loads all records where Hash % 3 == 0
    Node 1 loads all records where Hash % 3 == 1
    Node 2 loads all records where Hash % 3 == 2

    Adding a 4th node, obviously all the assignments have to be recomputed - Node 0 loads all records where Hash % 4 == 0, etc. I'd like to easily find these records through an indexed query without having to compute the mod individually.

    The best I've been able to come up with so far: if we take the prime factors of M (p1 * p2 * ...), then N % M == I implies N % p == I % p for each prime factor p of M. E.g. with 10 nodes, N % 10 == 6 implies:

    N % 2 == 0 (== 6 % 2)
    N % 5 == 1 (== 6 % 5)

    So storing an array of the "%" of N for the first "reasonable" number of primes for my data set should be helpful. For example, we store the hash and its residues for those primes:

    HASH    RESIDUES (array of %2, %3, %5, %7, ...)
    16      [0 1 1 2 ...]

    so looking for N % 10 == 6 is equivalent to looking for all values where array[0] == 0 and array[2] == 1. However, this breaks at the first prime larger than the highest number I'm storing in the factor table. Is there a better way?
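    A small C# illustration of the residue-table idea above (names are illustrative). As noted, it only answers hash % n == k when n is a product of distinct primes that are actually in the table:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ResidueIndex
        {
            static readonly int[] Primes = { 2, 3, 5, 7, 11, 13 };

            // Stored alongside each record: hash % p for the first few primes.
            static int[] Residues(int hash) => Primes.Select(p => ((hash % p) + p) % p).ToArray();

            // Answer "hash % n == k" using only the stored residues,
            // valid when n is a product of distinct primes from the table.
            static bool Matches(int[] residues, int n, int k)
            {
                for (int i = 0; i < Primes.Length; i++)
                {
                    int p = Primes[i];
                    if (n % p == 0 && residues[i] != k % p)
                        return false;
                }
                return true;
            }

            static void Main()
            {
                int hash = 16;
                var stored = Residues(hash);               // [0, 1, 1, 2, 5, 3]
                Console.WriteLine(Matches(stored, 10, 6)); // 16 % 10 == 6 -> True
                Console.WriteLine(Matches(stored, 10, 4)); // False
            }
        }

    In a database, each residue becomes an indexable column, so the query for a given n, k is a conjunction of simple equality predicates rather than a per-row mod computation.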

    Read the article

  • Is encoding needed in this decryption?

    - by Lijo
    I have an Encryption – Decryption scenario as shown below.

        // [Clear text ID string as input] -- [(ASCII GetBytes) + Encoding] -- [Encryption as byte array] -- [Database column is VarBinary] -- [Pass byte[] as VarBinary parameter to SP for comparison]
        // [ID stored as VarBinary in Database] -- [Read as byte array] -- [(Decrypt as byte array) + Encoding + (ASCII GetString)] -- Show as string in the UI

    My question is about the decryption scenario. After decryption I get a byte array. I am doing an encoding (IBM037) after that. Is it correct? Is there something wrong in the flow shown above?

        private static byte[] GetEncryptedID(string id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = EncodeTo64(id);
            input.RequestType = Encryption;
            ProgramInterface inputRequest = new ProgramInterface();
            inputRequest.Test_Trial_Request = input;
            using (KTestService operation = new KTestService())
            {
                return ((operation.KTrialOperation(inputRequest)).Test_Trial_Response.ResponseText);
            }
        }

        private static string GetDecryptedID(byte[] id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = id;
            input.RequestType = Decryption;
            ProgramInterface request = new ProgramInterface();
            request.Test_Trial_Request = input;
            using (KTestService operationD = new KTestService())
            {
                ProgramInterface1 response = operationD.KI014Operation(request);
                byte[] decryptedValue = response.ICSF_AES_Response.ResponseText;
                Encoding sourceByteFormat = Encoding.GetEncoding("IBM037");
                Encoding destinationByteFormat = Encoding.ASCII;
                // Convert from one byte format to the other (IBM to ASCII)
                byte[] ibmEncodedBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, decryptedValue);
                return System.Text.ASCIIEncoding.ASCII.GetString(ibmEncodedBytes);
            }
        }

        private static byte[] EncodeTo64(string toEncode)
        {
            byte[] dataInBytes = System.Text.ASCIIEncoding.ASCII.GetBytes(toEncode);
            Encoding destinationByteFormat = Encoding.GetEncoding("IBM037");
            Encoding sourceByteFormat = Encoding.ASCII;
            // Convert from one byte format to the other (ASCII to IBM)
            byte[] asciiBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, dataInBytes);
            return asciiBytes;
        }

    Read the article

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces to some rather archaic systems for handling online deposits to stored value accounts (think campus card accounts for students). Here's my dilemma: stage 1 of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Step two involves using a proprietary protocol for communicating with a legacy system for conducting the actual deposit. Step two requires that each transaction have a unique sequence number, and that the requests' seqnums are in order. Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency with the return from the off-site payment processor is beyond our control, there's always the chance for a race condition in the order of requests passed back to the proprietary system, and if the seqnums are out of order, the request fails silently (brilliant, right?). I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me. I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single process, single threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We are handling timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.

    Read the article
