Search Results

Search found 3169 results on 127 pages for 'identity ranges'.


  • Managing accounts on a private website for a real-life community

    - by Smudge
    I'm looking at setting up a walled-in website for a real-life community of people, and I was wondering if anyone has any experience with managing member accounts for this kind of thing. Some conditions that must be met: This community has a set list of real-life members, each of whom would be eligible for one account on the website. We don't expect or require that they all sign up. It is purely opt-in, but we anticipate that many of them would be interested in the services we are setting up. Some of the community members' emails are known, but some of them have fallen off the grid over the years, so ideally there would be a way for them to get back in touch with us through the public-facing side of the site. (And we'd want to manually verify the identity of anyone who does so.) Their names are known, and for similar projects in the past we have assigned usernames derived from their real-life names. This time, however, we are open to other approaches, such as letting them specify their own username or getting rid of usernames entirely. The specific web technology we will use (e.g. Drupal, Joomla, etc.) is not really our concern right now -- I am more interested in how this can be approached in the abstract. Our database already includes the full member roster, so we can email many of them generated links to a page where they can create an account. (And internally we can require that these accounts be paired with a known member.) Should we have them specify their own usernames, or are we fine letting them use their registered email address to log in? Are there any paradigms for walled-in community portals that help address security issues if, for example, one of their email accounts is compromised? We don't anticipate attempted break-ins being much of a threat, because nothing about this community is high-profile, but we do want to address security concerns. In addition, we want to make the sign-up process as painless for the members as possible, especially given the fact that we can't just make sign-ups open to anyone. I'm interested to hear your thoughts and suggestions! Thanks!

    Read the article

  • Java Cloud Service for developers

    - by JuergenKress
    The advent of cloud computing has reinvented application development for many companies. “That’s the beauty of the cloud,” says Cameron Purdy, vice president of development, Oracle. “It dramatically improves developer productivity because they can do what they do best without having to manage complex development, testing, staging, and production environments.” The key is to find a platform that doesn’t impose proprietary restrictions or force developers to learn new tools. For example, Oracle Java Cloud Service is an enterprise-grade platform as a service for building and deploying Java EE, Oracle WebLogic Server, and Oracle Application Development Framework (Oracle ADF) applications. “It’s designed to be flexible and easy to use,” says Purdy. “And it is also a standards-based solution -it’s not proprietary and there is no cloud lock-in. Developers get instant access to an enterprise-grade environment for a simple, monthly subscription.” Oracle Java Cloud Service instances are created with just a few clicks, so businesses can create a rich application development environment within minutes. Running on Oracle WebLogic Server and Oracle Exalogic, the underlying infrastructure also leverages Oracle Fusion Middleware’s integration with common services. For example, instances come integrated and preconfigured with optimized Oracle Database and Oracle Identity Management configurations. Based on Oracle Enterprise Manager, the Oracle Java Cloud Service console lets customers easily manage and monitor their Oracle Java Cloud Service instances. The open nature of the Oracle Java Cloud Service lets developers integrate through Web services such as SOAP and REST APIs, as well as use their favorite developer tools, whether they are out-of-the-box tools such as Maven and Ant or the productivity features built into Oracle JDeveloper, Oracle Enterprise Pack for Eclipse, or NetBeans IDE. The service allows for the seamless movement of applications between on-premise Oracle WebLogic Server domains and instances of Oracle Java Cloud Service within Oracle Cloud. This approach allows flexibility to mix and match the use of on-premise environments with cloud instances for development, test, and production environments. Visit to learn more and watch videos about Oracle Java Cloud Service. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. BlogTwitterLinkedInMixForumWiki Technorati Tags: java,cloud,oracle cloud,java cloud,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Why is my primitive xna square not drawn/shown?

    - by Mech0z
    I have made this class to draw a rectangle, but I cant get it to be drawn, I have no issues displaying a 3d model created in 3dmax, but shown these primitives seems much harder I use this to create it board = new Board(Vector3.Zero, 1000, 1000, Color.Yellow); And here is the implementation using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Shapes; using Quadro.Models; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; namespace Quadro { public class Board : IGraphicObject { //Private Fields private Vector3 modelPosition; private BasicEffect effect; private VertexPositionColor[] vertices; private Matrix rotationMatrix; private GraphicsDevice graphicsDevice; private Matrix cameraProjection; //Constructor public Board(Vector3 position, float length, float width, Color color) { var _color = color; vertices = new VertexPositionColor[6]; vertices[0].Position = new Vector3(position.X, position.Y, position.Z); vertices[1].Position = new Vector3(position.X, position.Y + width, position.Z); vertices[2].Position = new Vector3(position.X + length, position.Y, position.Z); vertices[3].Position = new Vector3(position.X + length, position.Y, position.Z); vertices[4].Position = new Vector3(position.X, position.Y + width, position.Z); vertices[5].Position = new Vector3(position.X + length, position.Y + width, position.Z); for(int i = 0; i < vertices.Length; i++) { vertices[i].Color = color; } initFields(); } private void initFields() { graphicsDevice = SharedGraphicsDeviceManager.Current.GraphicsDevice; effect = new BasicEffect(graphicsDevice); modelPosition = Vector3.Zero; float screenWidth = (float)graphicsDevice.Viewport.Width; float screenHeight = (float)graphicsDevice.Viewport.Height; float aspectRatio = screenWidth / screenHeight; this.cameraProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f); this.rotationMatrix = Matrix.Identity; } //Public Methods public void Update(GameTimerEventArgs e) { } public void Draw(Vector3 cameraPosition, GameTimerEventArgs e) { Matrix cameraView = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up); foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Apply(); effect.World = rotationMatrix * Matrix.CreateTranslation(modelPosition); effect.View = cameraView; effect.Projection = cameraProjection; graphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 2, VertexPositionColor.VertexDeclaration); } } public void Rotate(Matrix rotationMatrix) { this.rotationMatrix = rotationMatrix; } public void Move(Vector3 moveVector) { this.modelPosition += moveVector; } } }
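
    A few things commonly keep a DrawUserPrimitives quad from showing up with BasicEffect: vertex colors never enabled on the effect, World/View/Projection assigned after pass.Apply() (so the pass runs with default identity matrices), and back-face culling when the winding order is off. A minimal sketch of the Draw method with those three points addressed, reusing the fields from the class above; whether they are the actual cause here is an assumption:

        // Sketch: set effect parameters *before* Apply(), enable vertex colours,
        // and rule out winding-order culling while debugging.
        public void Draw(Vector3 cameraPosition, GameTimerEventArgs e)
        {
            Matrix cameraView = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);

            effect.VertexColorEnabled = true;   // BasicEffect ignores vertex colours otherwise
            effect.World = rotationMatrix * Matrix.CreateTranslation(modelPosition);
            effect.View = cameraView;
            effect.Projection = cameraProjection;

            graphicsDevice.RasterizerState = RasterizerState.CullNone;   // debugging aid

            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();   // Apply() snapshots the parameters that were set above
                graphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 2);
            }
        }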

    Read the article

  • Insert Or update (aka Replace or Upsert)

    - by Davide Mauri
    The topic is really not new, but since it's the second time in a few days that I've had to explain it to different customers, I think it's worth making a post out of it. Many times developers would like to insert a new row in a table or, if the row already exists, update it with new data. MySQL has a specific statement for this action, called REPLACE: http://dev.mysql.com/doc/refman/5.0/en/replace.html or the INSERT …. ON DUPLICATE KEY UPDATE option: http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html With SQL Server you can do the very same thing in a more standard way, using the MERGE statement with the support of Row Constructors. Let's say that you have this table: CREATE TABLE dbo.MyTargetTable (     id INT NOT NULL PRIMARY KEY IDENTITY,     alternate_key VARCHAR(50) UNIQUE,     col_1 INT,     col_2 INT,     col_3 INT,     col_4 INT,     col_5 INT ) GO INSERT [dbo].[MyTargetTable] VALUES ('GUQNH', 10, 100, 1000, 10000, 100000), ('UJAHL', 20, 200, 2000, 20000, 200000), ('YKXVW', 30, 300, 3000, 30000, 300000), ('SXMOJ', 40, 400, 4000, 40000, 400000), ('JTPGM', 50, 500, 5000, 50000, 500000), ('ZITKS', 60, 600, 6000, 60000, 600000), ('GGEYD', 70, 700, 7000, 70000, 700000), ('UFXMS', 80, 800, 8000, 80000, 800000), ('BNGGP', 90, 900, 9000, 90000, 900000), ('AMUKO', 100, 1000, 10000, 100000, 1000000) GO If you want to insert or update a row, you can just do this: MERGE INTO     dbo.MyTargetTable T USING     (SELECT * FROM (VALUES ('ZITKS', 61, 601, 6001, 60001, 600001)) Dummy(alternate_key, col_1, col_2, col_3, col_4, col_5)) S ON     T.alternate_key = S.alternate_key WHEN     NOT MATCHED THEN     INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5) WHEN     MATCHED AND T.col_1 != S.col_1 THEN     UPDATE SET         T.col_1 = S.col_1,         T.col_2 = S.col_2,         T.col_3 = S.col_3,         T.col_4 = S.col_4,         T.col_5 = S.col_5 ; If you want to insert/update more than one row at once, you can super-charge the idea using Table-Valued Parameters, which you can just send from your .NET application. Easy, powerful and effective.
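
    For reference, a sketch of the multi-row variant the post alludes to, using a table variable standing in for the table-valued parameter you would send from .NET; the @NewRows name and its shape are assumptions for illustration:

        -- Upsert a whole batch in one statement against the table defined above.
        DECLARE @NewRows TABLE
        (
            alternate_key VARCHAR(50),
            col_1 INT, col_2 INT, col_3 INT, col_4 INT, col_5 INT
        );

        INSERT @NewRows VALUES
            ('ZITKS', 61, 601, 6001, 60001, 600001),   -- existing key: will be updated
            ('QQQQQ', 11, 101, 1001, 10001, 100001);   -- new key: will be inserted

        MERGE INTO dbo.MyTargetTable AS T
        USING @NewRows AS S
            ON T.alternate_key = S.alternate_key
        WHEN NOT MATCHED THEN
            INSERT (alternate_key, col_1, col_2, col_3, col_4, col_5)
            VALUES (S.alternate_key, S.col_1, S.col_2, S.col_3, S.col_4, S.col_5)
        WHEN MATCHED THEN
            UPDATE SET T.col_1 = S.col_1, T.col_2 = S.col_2, T.col_3 = S.col_3,
                       T.col_4 = S.col_4, T.col_5 = S.col_5;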

    Read the article

  • XNA 4.0: 2D Camera Y and X are going in wrong direction

    - by Setheron
    I asked this question on stackoverflow but assumed this might be a better area to ask it as well for a more informed answer. My problem is that I am trying to create a camera class and have it so that my camera follows the proper RHS, however the Y axis seems to be inverted since on the screen the 0 starts at the top. Here is my Camera2D Class: class Camera2D { private Vector2 _position; private float _zoom; private float _rotation; private float _cameraSpeed; private Viewport _viewport; private Matrix _viewMatrix; private Matrix _viewMatrixIverse; public static float MinZoom = float.Epsilon; public static float MaxZoom = float.MaxValue; public Camera2D(Viewport viewport) { _viewMatrix = Matrix.Identity; _viewport = viewport; _cameraSpeed = 4.0f; _zoom = 1.0f; _rotation = 0.0f; _position = Vector2.Zero; } public void Move(Vector2 amount) { _position += amount; } public void Zoom(float amount) { _zoom += amount; _zoom = MathHelper.Clamp(_zoom, MaxZoom, MinZoom); UpdateViewTransform(); } public Vector2 Position { get { return _position; } set { _position = value; UpdateViewTransform(); } } public Matrix ViewMatrix { get { return _viewMatrix; } } private void UpdateViewTransform() { Matrix proj = Matrix.CreateTranslation(new Vector3(_viewport.Width * 0.5f, _viewport.Height * 0.5f, 0)) * Matrix.CreateScale(new Vector3(1f, 1f, 1f)); _viewMatrix = Matrix.CreateRotationZ(_rotation) * Matrix.CreateScale(new Vector3(_zoom, _zoom, 1.0f)) * Matrix.CreateTranslation(_position.X, _position.Y, 0.0f); _viewMatrix = proj * _viewMatrix; } } I test it using SpriteBatch in the following way: protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); Vector2 position = new Vector2(0, 0); // TODO: Add your drawing code here spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, null, camera.ViewMatrix); Texture2D circle = CreateCircle(100); spriteBatch.Draw(circle, position, Color.Red); spriteBatch.End(); base.Draw(gameTime); }
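
    One conventional way to build a SpriteBatch camera matrix is world-to-screen order: translate the camera position to the origin, rotate, zoom, flip Y so that +Y points up in world space, then re-centre on the viewport. A sketch of UpdateViewTransform written that way; the Y flip is the assumption being illustrated, not necessarily the only fix:

        // Sketch: conventional 2D camera transform with an explicit Y inversion.
        private void UpdateViewTransform()
        {
            _viewMatrix =
                Matrix.CreateTranslation(new Vector3(-_position.X, -_position.Y, 0f)) *
                Matrix.CreateRotationZ(_rotation) *
                Matrix.CreateScale(_zoom, _zoom, 1f) *
                Matrix.CreateScale(1f, -1f, 1f) *            // screen Y grows downward, world Y up
                Matrix.CreateTranslation(_viewport.Width * 0.5f, _viewport.Height * 0.5f, 0f);
        }

    Note that flipping Y under SpriteBatch also mirrors the sprites themselves, so this is usually paired with SpriteEffects.FlipVertically when drawing, or avoided entirely by keeping screen-space Y-down and negating incoming world-space Y coordinates instead.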

    Read the article

  • Internet Explorer keeps asking for NTLM credentials in Intranet zone

    - by Tomalak
    Long text, sorry for that. I'm trying to be as specific as possible. I'm on Windows 7 and I experience a very frustrating Internet Explorer 8 behavior. I'm in a company LAN with some intranet servers and a proxy for connecting with the outside world. On sites that are clearly recognized as being "Local Intranet" (as indicated in the IE status bar) I keep getting "Windows Security" dialog boxes that ask me to log in. These pages are served off an IIS6 with "Integrated Windows Security" enabled, NTFS permits Everyone:Read on the files themselves. If I enter my Windows credentials, the page loads fine. However, the dialog boxes will be popping up the next time, regardless if I ticked "Remember my credentials" or not. (Credentials are stored in the "Credential Manager" but that does not make any difference as to how often these login boxes appear.) If I click "Cancel", one of two things can happen: Either the page loads with certain resources missing (images, styleheets, etc), or it does not load at all and I get HTTP 401.2 (Unauthorized: Logon Failed Due to Server Configuration). This depends on whether the logon box was triggered by the page itself, or a referenced resource. The behavior appears to be completely erratic, sometimes the pages load smoothly, sometimes one resource triggers a logon message, sometimes it does not. Even simply re-loading the page can result in changed behavior. I'm using WPAD as my proxy detection mechanism. All Intranet hosts do bypass the proxy in the PAC file. I've checked every IE setting I can think of, entered host patterns, individual host names, IP ranges in every thinkable configuration to the "Local Intranet" zone, ticked "Include all sites that bypass the proxy server", you name it. It boils down to "sometimes it just does not work", and slowly I'm losing my mind. ;-) I'm aware that this is related to IE not automatically passing my NTLM credentials to the webserver but asking me instead. Usually this should only happen for NTLM-secured sites that are not recognized as being in the "Intranet" zone. As explained, this is not the case here. Especially since half of a page can load perfectly and without interruption and some page's resources (coming from the same server!) trigger the login message. I've looked at http://support.microsoft.com/kb/303650, which gives the impression of describing the problem, but nothing there seems to work. And frankly, I'm not certain if "manually editing the registry" is the right solution for this kind of problem. I'm not the only person in the world with an IE/intranet/IIS configuration, after all. I'm at a loss, can somebody give me a hint?

    Read the article

  • where on disk is space allocated for new files inside LVM lv with ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup. Recently an additional, very small disk was added as physical volume to that volume group and I expanded both the logical volume, and the ext4 file system therein onto that disk. This lv is used to store incremental backups using rsync and is only about 30% full, there have rarely been any files deleted from it, only incremental writes. Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate will have it, this WAS the "in an event of catastrophic failure on the primary server"-backup, the event happened, the boss is not happy, so this kinda has to work... According to this (Part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new pv with identical metadata to the failed disk, which will make the volume accessible, but of course leave giant holes in the file system. I have'n tried it yet, because it involves repairing (writing to) the file system which eliminates the possibility of trying other things if it fails. Now my question is: How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of PVs, in the order they were added to the vg? Is it striped somehow in order to increase performance/balance load? since this defective disk was added only later to an existing lvm2 vg and lv, containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words: what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to go spend $1500 on having 250GB of empty space recovered when I send the defective disk in for repair? Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (comparing addresses of used data blocks inside ext4 to address ranges that were on the missing pv, something like that, preferably easy to automate) I know bitwise-copying the entire lv into an image file before trying to repair the ext4 would probably be a good idea, but since this lv is very large and I just suffered major file system failure on several systems it is probably a luxury I don't have... Any suggestions?
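
    For the "where did the extents land" part of the question, LVM can print the segment-to-PV mapping of a linear LV directly, and dumpe2fs can list which block ranges ext4 actually uses without mounting anything. A read-only sketch; the VG/LV names are placeholders for this setup:

        # bring the VG up despite the missing PV (read-only inspection only)
        vgchange -ay --partial backupvg

        # show how the LV's logical extents map onto physical volumes, in allocation order
        lvdisplay --maps /dev/backupvg/backuplv
        pvdisplay --maps

        # list used/free blocks per ext4 block group straight from the superblock;
        # compare the used ranges against the extent range the dead PV covered
        dumpe2fs /dev/backupvg/backuplv | less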

    Read the article

  • Odd squid transparent redirect behavior

    - by EMiller
    This is the first time I've set up squid. It's running a redirect script that does some text search/replace on html pages, and then saves them to a location on the same machine on the nginx path - then issues the redirect to that URL (it's an art project :D). The relevant lines in squid.conf are http_port 3128 transparent redirect_program /etc/squid/jefferson_redirect.py The jefferson_redirect.py script is based on this script: http://gofedora.com/how-to-write-custom-redirector-rewritor-plugin-squid-python/ The issue: I'm getting strange http redirect behavior. For example, here is the normal request/response from a PHP script that issues a header("Location:"); - a 302 redirect: http://redirector.mysite.com/?unicmd=g+yreka GET /?unicmd=g+yreka HTTP/1.1 Host: redirector.mysite.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive HTTP/1.1 302 Found Date: Tue, 13 Apr 2010 05:15:43 GMT Server: Apache X-Powered-By: PHP/5.2.11 Location: http://www.google.com/search?q=yreka Content-Type: text/html Vary: User-Agent,Accept-Encoding Content-Encoding: gzip Content-Length: 2108 Keep-Alive: timeout=3, max=100 Connection: Keep-Alive Here's what it looks like when running through the squid proxy (note that "redirector.mysite.com" is not the site running squid or nginx): http://redirector.mysite.com/?unicmd=g+yreka GET /?unicmd=g+yreka HTTP/1.1 Host: redirector.mysite.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Proxy-Connection: keep-alive If-Modified-Since: Tue, 13 Apr 2010 05:21:02 GMT HTTP/1.0 200 OK Server: nginx/0.7.62 Date: Tue, 13 Apr 2010 05:21:10 GMT Content-Type: text/html Content-Length: 17865 Last-Modified: Tue, 13 Apr 2010 05:21:10 GMT Accept-Ranges: bytes X-Cache: MISS from jefferson X-Cache-Lookup: HIT from jefferson:3128 Via: 1.1 jefferson:3128 (squid/2.7.STABLE6) Connection: keep-alive Proxy-Connection: keep-alive It is basically working - but the URL http://redirector.mysite.com/?unicmd=g+yreka remains unchanged, while displaying the google page (mostly broken as it's using URLs relative to redirector.mysite.com) I've experienced a similar thing with google results pages: when clicking to another page from google, I get a google URL, with the other site's content. Sorry for the long post - many thanks if you've read this far! Any ideas?
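
    The trace shows what a bare-URL reply from a Squid 2.x redirector does: Squid fetches the rewritten URL itself and serves it under the original address, which is why the browser still shows the redirector.mysite.com URL with the other site's content. To make the client actually move, the redirector reply needs a status prefix. A sketch of the redirector loop with that change; the rewrite logic and target URL are placeholders:

        #!/usr/bin/env python
        # Squid 2.x redirector sketch: replying "302:<url>" (or "301:<url>") makes Squid
        # send an HTTP redirect to the client instead of silently fetching the new URL.
        import sys

        def rewrite(url):
            # ... text search/replace and save-to-nginx logic would go here ...
            return "http://art-host.example/generated/page.html"   # assumed target

        while True:
            line = sys.stdin.readline()
            if not line:
                break
            url = line.split(" ")[0]                 # input format: URL ip/fqdn ident method
            sys.stdout.write("302:" + rewrite(url) + "\n")
            sys.stdout.flush()                       # Squid expects one unbuffered reply per request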

    Read the article

  • Mikrotik and NAT/Routing issue

    - by arul
    I have basic NAT/Routing problem with Mikrotik RB750 that I've been unable to solve over the past days. From our ISP we have 26 IP addresses: 10.10.10.192/27, with 10.10.10.193 being the gateway and 10.10.10.194 the first available IP. What I need is that everything connected to ether2 gets a public IP from the DHCP server, and everything connected to ether3 gets a local IP from another DHCP (192.168.100.0/24). All clients should have internet access (I'll figure out bandwidth throttling later) and optimally just 'see' each other (all boxes are Win7, I guess this can ultimately be handled with VPN). Here is my setup: ether1 (10.10.10.194) is connected directly to ISP. 20 clients connected to ether2(10.10.10.195), and another 20 to ether3(10.10.10.196) (both through same 24 port switches). This is my setup, which doesn't work, all 20 clients from ether2 can access the internet, though all comm. seems to come from 10.10.10.194 (is this due to the masquerade on ether1?), and ether3 can't access the internet at all. I think that I need to masquerade ether3, and SNAT/DNAT or NETMAP ether2, but that doesn't work either, I guess that I need to somehow 'wire' both ether2+3 to ether1. Address list: # ADDRESS NETWORK INTERFACE 0 ;;; public 10.10.10.194/32 10.10.10.192 ether1-gateway 1 ;;; inner DHCP 192.168.100.0/24 192.168.100.0 ether3-private 2 ;;; public 10.10.10.195/32 10.10.10.192 ether2-pub 3 ;;; public 10.10.10.196/32 10.10.10.192 ether3-private NAT 0 ;;; ether3 nat chain=srcnat action=src-nat to-addresses=10.10.10.196 src-address=192.168.100.0/24 out-interface=ether3-private 1 ;;; ether3 nat chain=dstnat action=dst-nat to-addresses=192.168.100.0/24 in-interface=ether3-private 2 ;;; ether1 masquerade chain=srcnat action=masquerade to-addresses=10.10.10.194 out-interface=ether1-gateway Routes: # DST-ADDRESS PREF-SRC GATEWAY DISTANCE 0 A S 0.0.0.0/0 ether1-gateway 1 2 A S 10.10.10.192/27 10.10.10.195 ether2-pub 1 3 ADC 10.10.10.192/32 10.10.10.195 ether2-pub 0 ether1-gateway ether3-private 4 ADC 192.168.100.0/24 192.168.100.0 ether3-private 0 IP Pools: # NAME RANGES 0 public-pool 10.10.10.201-10.10.10.220 1 private-pool 192.168.100.2-192.168.100.254 DHCP configs: # NAME INTERFACE RELAY ADDRESS-POOL LEASE-TIME ADD-ARP 0 public-dhcp ether2-pub public-pool 3d 1 private-dhcp ether3-private private-pool 3d Thanks!
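
    Hard to be definitive from the prints alone, but two things stand out: ether3-private carries the network address itself (192.168.100.0/24) rather than a host address the clients can use as their gateway, and the two "ether3 nat" rules rewrite traffic in directions the existing ether1 masquerade already covers. A sketch of the minimal changes that pattern usually needs, with addresses taken from the question:

        # give the router a usable host address on the private segment (the clients' gateway)
        /ip address set [find interface=ether3-private] address=192.168.100.1/24

        # the ether1 masquerade already hides both segments on the way out to the ISP,
        # so the two ether3-specific NAT rules (0 and 1 above) can simply be removed
        /ip firewall nat remove [find comment~"ether3 nat"]

    The private DHCP server would then hand out 192.168.100.1 as the router (and DNS relay, if used) for the 192.168.100.0/24 pool.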

    Read the article

  • User given a login prompt when closing Word documents after viewing them in IE7

    - by Martin Owen
    When using IE7 to view Word documents on our CRM system (an ASP.NET 2.0 application running on Windows Server 2003 and IIS 6 and using Windows authenticaton) I'm finding that a prompt appears when the user closes the document. The Word document is originally opened by clicking a link in the CRM system. Are there permissions that I can set on the folder containing the Word documents to prevent this prompt? I've already tried only allowing the Read permission for the Users group (I've left Administrators with Full Control.) If there's another solution to this without using permissions please let me know. UPDATE: I ran Fiddler as suggested by JD and here is the output from the two responses after the request for the document. The first seems to be a DAV response and the second is the authentication request. How do I prevent the DAV response and just return the .doc on the server? OPTIONS / HTTP/1.1 Translate: f User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery Host: <REMOVED> Content-Length: 0 Connection: Keep-Alive Pragma: no-cache X-NovINet: v1.2 HTTP/1.1 200 OK Date: Thu, 18 Feb 2010 13:37:36 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET MS-Author-Via: DAV Content-Length: 0 Accept-Ranges: none DASL: <DAV:sql> DAV: 1, 2 Public: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, POST, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH, LOCK, UNLOCK, SEARCH Allow: OPTIONS, TRACE, GET, HEAD, COPY, PROPFIND, SEARCH, LOCK, UNLOCK Cache-Control: private ------------------------------------------------------------------ OPTIONS /docs/ZONE%20100-105.doc HTTP/1.1 Translate: f User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery Host: <REMOVED> Content-Length: 0 Connection: Keep-Alive Pragma: no-cache X-NovINet: v1.2 HTTP/1.1 401 Unauthorized Content-Length: 83 Content-Type: text/html Server: Microsoft-IIS/6.0 WWW-Authenticate: Basic realm="<REMOVED>" X-Powered-By: ASP.NET Date: Thu, 18 Feb 2010 13:37:36 GMT ------------------------------------------------------------------ UPDATE 2: I found a potential workaround for the problem via this post: http://forums.iis.net/p/1149091/1868317.aspx. I moved all of the documents that are being requested into a folder outside of the web root, and created a virtual directory for it (also outside of the web root). When I followed a link to one of the documents in IE and then closed the document I wasn't presented with a login prompt. I should point out that I'm not using FPSE, unlike the person in the forum post. Ideally I don't want to have to put the documents in a separate virtual directory, but this is the simplest solution I've found so far.

    Read the article

  • How can I verify that my SSD is performing as it should?

    - by Jon Skeet
    EDIT: Okay, so I've no idea what caused the change, but after trying loads of different things to work out what was wrong, I've rerun the WEI (about the 4th time in total) and the score has jumped to a far more respectable 7.3. I'm going to leave well alone now :) I've got a brand new 256GB SSD (Crucial CT256M225) which should have stellar performance. However, on my (also brand new) Dell Studio 1557 with Windows 7 Professional 64 bit, it's only giving a performance index of 5.9. I realise the performance index should be taken with a bit of a pinch of salt, but I wonder whether something's wrong. Given this paragraph from this MSDN article on Windows 7, I'd expect to see a high 6.X or possible a 7.X figure: In Windows 7, there are new random read, random write and flush assessments. Better SSDs can score above 6.5 all the way to 7.9. To be included in that range, an SSD has to have outstanding random read rates and be resilient to flush and random write workloads. In the Beta timeframe of Windows 7, there was a capping of scores at 1.9, 2.9 or the like if a disk (SSD or HDD) didn’t perform adequately when confronted with our random write and flush assessments. Feedback on this was pretty consistent, with most feeling the level of capping to be excessive. As a result, we now simply restrict SSDs with performance issues from joining the newly added 6.0+ and 7.0+ ranges. SSDs that are not solid performers across all assessments effectively get scored in a manner similar to what they would have been in Windows Vista, gaining no Win7 boost for great random read performance. How can I diagnose any performance issues with either the disk or how Windows 7 is handling it? Are there any particularly good tools you'd recommend? One note of curiosity: I couldn't install the firmware update (to 1916) until I changed my BIOS handling of the drive to ATA mode; after installing the firmware I tried to boot the Windows installation DVD - but that only worked after turning it back to AHCI mode (which I've left it in). Installing Windows 7 took longer than I expected - it sat at the "Windows is loading files" prompt for a very long time. Likewise it was on "Expanding files (0%)" for a long time. Since installation it's been fine though - but I don't know whether it's really providing quite as beefy performance as it should. EDIT: My netbook with the 64GB equivalent drive has a performance index of 6.6...
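
    Since the WEI disk score is produced by WinSAT, one low-effort check is to re-run just the disk assessment from an elevated command prompt and read the raw MB/s and random-I/O figures it prints rather than the single score. A sketch, assuming the SSD is drive C:

        winsat disk -drive c
        winsat formal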

    Read the article

  • HTTP Content-type header for cached files

    - by Brian
    Hello, Using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-type is only set correctly the first time I load it - subsequent refreshes are missing Content-type altogether and it's creating some problems for me. Specifically, gzip is not compressing these files. I can get around this by appending a random query string value to the end of each filename, eg. http://www.site.com/script.js?12345 However, I don't want to have to do that, since caching is good and all I want is for the Content-type to be present. I've tried using a RewriteRule to force the type but still didn't solve the problem. Any ideas? Thanks, Brian More Details: HTTP headers WITHOUT random query string value: http://localhost/script.js GET /script.js HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://localhost/ Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5; If-Modified-Since: Thu, 29 Apr 2010 15:49:56 GMT If-None-Match: "3440e9-119ed-485621404f100" Cache-Control: max-age=0 HTTP/1.1 304 Not Modified Date: Thu, 29 Apr 2010 20:19:44 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Connection: Keep-Alive Keep-Alive: timeout=5, max=100 Etag: "3440e9-119ed-485621404f100" Vary: Accept-Encoding X-Pad: avoid browser bug HTTP headers WITH random query string value: http://localhost/script.js?c947344de8278053f6edbb4365550b25 GET /script.js?c947344de8278053f6edbb4365550b25 HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://localhost/ Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5; HTTP/1.1 200 OK Date: Thu, 29 Apr 2010 20:14:40 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Last-Modified: Thu, 29 Apr 2010 15:49:56 GMT Etag: "3440e9-119ed-485621404f100" Accept-Ranges: bytes Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 24605 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: application/javascript
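
    The second trace is the give-away: it is a 304 Not Modified, and a 304 carries no message body, so Apache omits Content-Type and Content-Encoding and there is nothing for mod_deflate to compress; the browser simply re-uses the copy it already has. One way to confirm the full response is still gzipped is to force an unconditional GET (no If-Modified-Since / If-None-Match) and dump the headers, for example:

        # request without conditional headers so Apache must send a 200 with a body
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://localhost/script.js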

    Read the article

  • Varnish 503 Guru Mediation errors with pfsense and healthy apache

    - by Fammy
    We are running a pfsense firewall / load balancer with varnish as service, In front of Fedora linux webservers running apache. We are getting intermittent 503 guru mediation errors. We are a bit stuck scratching our heads because it is not easily repeatable. The timeouts are set to 30s (connect and first byte) but yet the 503 page will show instantly, not after 30s. Then if you refresh immediately it may very well work instantly and sometimes for a 100 refreshes. The load average on the web servers is < 1, the DB server is < 3 (all servers (web, db, pfsense/varnish) are physical rather than VM. I would have thought if the timeouts were being hit then the 503 page would only appear after 30s am I mistaken? Also when an error happens there does not appear to be any corresponding error in apache's log files. This seems to affect pages as well as images, so it is possible to have the page load fine, and for 9/10 images on the page to be fine but 1 not work An example of the varnish debug is below. It says no backend connection but I can't figure out why, if the load was high on apache I could understand it being flaky The machines are on the same gig ethernet lan 21 ReqStart c *IP-REMOVED* 33418 1274368062 21 RxRequest c GET 21 RxURL c /fashion/ 21 RxProtocol c HTTP/1.1 21 RxHeader c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.5) Gecko/2008121622 Fedora/3.0.5-1.fc10 Firefox/3.0.5 21 RxHeader c Host: *ourdomain.com* 21 RxHeader c Accept: */* 21 RxHeader c Accept-Encoding: deflate, gzip 21 VCL_call c recv lookup 21 VCL_call c hash 21 Hash c /fashion/ 21 Hash c *ourdomain.com* 21 VCL_return c hash 21 VCL_call c miss fetch 21 FetchError c no backend connection 21 VCL_call c error restart 21 VCL_call c recv lookup 21 VCL_call c hash 21 Hash c /fashion/ 21 Hash c *ourdomain.com* 21 VCL_return c hash 21 VCL_call c miss fetch 21 FetchError c no backend connection 21 VCL_call c error restart 21 VCL_call c recv lookup 21 VCL_call c hash 21 Hash c /fashion/ 21 Hash c *ourdomain.com* 21 VCL_return c hash 21 VCL_call c miss fetch 21 FetchError c no backend connection 21 VCL_call c error deliver 21 VCL_call c deliver deliver 21 TxProtocol c HTTP/1.1 21 TxStatus c 503 21 TxResponse c Service Unavailable 21 TxHeader c Server: Varnish 21 TxHeader c Content-Type: text/html; charset=utf-8 21 TxHeader c Content-Length: 384 21 TxHeader c Accept-Ranges: bytes 21 TxHeader c Date: Wed, 11 Apr 2012 10:36:17 GMT 21 TxHeader c X-Varnish: 1274368062 21 TxHeader c Age: 0 21 TxHeader c Via: 1.1 varnish 21 TxHeader c Connection: close 21 TxHeader c X-Cache: MISS 21 Length c 384 21 ReqEnd c 1274368062 1334140577.449995041 1334140577.450334787 1.794108152 0.000282764 0.000056982
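
    An instant 503 with "no backend connection" usually means the backend connect was refused, the backend was marked sick by a probe, or its connection limit was hit, rather than a 30-second timeout actually expiring. It can help to state the per-backend limits and a health probe explicitly in the VCL; a sketch for Varnish 2.1, with the host/port as placeholders:

        backend apache1 {
            .host = "192.168.1.10";
            .port = "80";
            .connect_timeout       = 30s;
            .first_byte_timeout    = 30s;
            .between_bytes_timeout = 60s;
            .max_connections       = 250;    # an instant 503 can mean this limit was reached
            .probe = {
                .url       = "/";
                .interval  = 5s;
                .timeout   = 2s;
                .window    = 5;
                .threshold = 3;              # a sick backend also reports "no backend connection"
            }
        }

    varnishlog filtered on Backend_health (and the backend counters in varnishstat) should then show whether the backend is flapping between healthy and sick when the 503s appear.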

    Read the article

  • Secure openVPN using IPTABLES

    - by bob franklin smith harriet
    Hey, I set up an OpenVPN server and it works OK. The next step is to secure it; I opted to use iptables to only allow certain connections through, but so far it is not working. I want to enable access to the network behind my OpenVPN server, and allow other services (web access); when iptables is disabled or set to allow all, this works fine, but when using my following rules it does not. Also note, I already configured OpenVPN itself to do what I want and it works fine; it's only failing when iptables is started. Any help telling me why this isn't working will be appreciated. These are the lines that I added in accordance with OpenVPN's recommendations; unfortunately, testing shows that these commands are required. They seem incredibly insecure though; is there any way to get around using them? # Allow TUN interface connections to OpenVPN server -A INPUT -i tun+ -j ACCEPT #allow TUN interface connections to be forwarded through other interfaces -A FORWARD -i tun+ -j ACCEPT # Allow TAP interface connections to OpenVPN server -A INPUT -i tap+ -j ACCEPT # Allow TAP interface connections to be forwarded through other interfaces -A FORWARD -i tap+ -j ACCEPT These are the new chains and commands I added to restrict access as much as possible. Unfortunately, with these enabled, all that happens is the OpenVPN connection establishes fine, and then there is no access to the rest of the network behind the OpenVPN server. Note that I am configuring the main iptables file and I am paranoid, so all ports and IP addresses are altered, and -N etc. appears before this, so ignore that they don't appear. I added some explanations of what I 'intended' these rules to do, so you don't waste time figuring out where I went wrong: #accepts the vpn over port 1192 -A INPUT -p udp -m udp --dport 1192 -j ACCEPT -A INPUT -j INPUT-FIREWALL -A OUTPUT -j ACCEPT #packets that are to be forwarded from 10.10.1.0 network (all open vpn clients) to the internal network (192.168.5.0) jump to [sic]foward-firewall chain -A FORWARD -s 10.10.1.0/24 -d 192.168.5.0/24 -j FOWARD-FIREWALL #same as above, except for a different internal network -A FORWARD -s 10.10.1.0/24 -d 10.100.5.0/24 -j FOWARD-FIREWALL # reject any not from either of those two ranges -A FORWARD -j REJECT -A INPUT-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT-FIREWALL -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT-FIREWALL -j REJECT -A FOWARD-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT #80 443 and 53 are accepted -A FOWARD-FIREWALL -m tcp -p tcp --dport 80 -j ACCEPT -A FOWARD-FIREWALL -m tcp -p tcp --dport 443 -j ACCEPT #192.168.5.150 = openVPN sever -A FOWARD-FIREWALL -m tcp -p tcp -d 192.168.5.150 --dport 53 -j ACCEPT -A FOWARD-FIREWALL -m udp -p udp -d 192.168.5.150 --dport 53 -j ACCEPT -A FOWARD-FIREWALL -j REJECT COMMIT Now I wait :D
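
    One thing the posted rule set does not appear to cover, and which would produce exactly this symptom, is the return path: the FORWARD chain only jumps to FOWARD-FIREWALL for traffic from 10.10.1.0/24, so replies coming back from 192.168.5.0/24 toward the tunnel fall through to "-A FORWARD -j REJECT". A sketch of the two pieces that are usually missing; the eth0 interface name is an assumption:

        # 1) allow established/related replies back toward the VPN subnet before the REJECT
        iptables -I FORWARD 1 -d 10.10.1.0/24 -m state --state RELATED,ESTABLISHED -j ACCEPT

        # 2) if the hosts on 192.168.5.0/24 have no route back to 10.10.1.0/24, NAT the VPN
        #    clients behind the server's LAN address instead of adding routes on every host
        iptables -t nat -A POSTROUTING -s 10.10.1.0/24 -d 192.168.5.0/24 -o eth0 -j MASQUERADE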

    Read the article

  • ISC DHCP - Force clients to get a new IP address, instead of the being re-issued their previous lease's IP

    - by kce
    We are in the middle of a migration of our DHCP and DNS services from a Debian-based server to a Windows Server 2008 R2 implementation. The Debian server is running isc-dhcpd-V3.1.1. All of workstations are configured to have fixed-addresses between .3 and .40 (the motivation behind that choice is mostly management/political much like here). DHCP leases are given out in the range of .100 to .175. Statically configured servers live in the .200 block and above (which is mostly empty). When we move to the Windows platform, management/political considerations require me to move the IP ranges around again. We would like to keep .1 - .10 reserved for network appliances, switches, and other infrastructure. .200 will remain designated for servers. The addressing space in between should be available to clients and IPs should be dynamically allocated (Edit: instead of automatic as originally mentioned) by the server. My Address Pool on the Windows Server looks like this: 192.168.0.1 192.168.0.254 (Address range for distribution) 192.168.0.1 192.168.0.10 (IP addresses excluded from distribution) 192.168.0.200 192.168.0.254 (IP addresses excluded from distribution) Currently, we have all of our clients still on the .3 - .40 range, and a few machines still active in the .100 - .175 (although there are lots devices that are powered off that still have expired leases with IPs from that range). Since the lease "database" isn't shared between the old and new DHCP server how can I prevent clients from receiving a lease with an IP address that is currently being held by client with a non-expired lease from the old DHCP server? If I just expand the range on the Debian DHCP server to be 192.168.0.10 - 192.168.0.199 is there a way to force clients to not re-use their old IP address when they send their DHCPDISCOVER? Can I make the Windows DHCP server be authoritiative like the ISC implementation? The dhcpd.conf from the Debian server: ddns-update-style none; authoritative; default-lease-time 43200; #12 hours max-lease-time 86400; #24 hours subnet 192.168.0.0 netmask 255.255.255.0 { option routers 192.168.0.1; option subnet-mask 255.255.255.0; option broadcast-address 192.168.0.255; range 192.168.0.100 192.168.0.175; } host workstation-1 { hardware ethernet 00:11:22:33:44:55; fixed-address 192.168.0.3; } ... and so on until 192.168.0.40
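
    DHCP itself offers no clean way to make a client forget an address it still holds a lease on: a client in INIT-REBOOT keeps requesting its old address, and whichever server owns that lease will confirm it until the lease expires. The usual transition is therefore to drain the old pool with very short lease times while Windows' conflict detection ("Conflict detection attempts" in the IPv4 server properties, which pings an address before offering it) guards against overlap in the meantime. A sketch of a transitional dhcpd.conf for the Debian server, leaving the range untouched so no new overlap is introduced:

        # transitional config for the old ISC server: keep answering, but only with
        # very short leases so the 192.168.0.100-175 pool drains within minutes
        authoritative;
        ddns-update-style none;
        default-lease-time 600;   # 10 minutes
        max-lease-time 900;       # 15 minutes

        subnet 192.168.0.0 netmask 255.255.255.0 {
            option routers 192.168.0.1;
            option subnet-mask 255.255.255.0;
            range 192.168.0.100 192.168.0.175;
        }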

    Read the article

  • Apache/2.2.20 (Ubuntu 11.10) gzip compression won't work on php pages, content is chunked

    - by FamousInteractive
    I'm running into a problem with a new production server whereto I'm transferring projects. The HTML output of the PHP applications isn't compressed by the Apache mod_deflate module. Other resources, as stylesheet and javascript files, even html pages, which are served with the same Content-type (text/html) as the PHP output, are compressed! The projects use the following rules (from HTML5 boilerplate) in the .htaccess: <IfModule mod_deflate.c> # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/ <IfModule mod_setenvif.c> <IfModule mod_headers.c> SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding </IfModule> </IfModule> # HTML, TXT, CSS, JavaScript, JSON, XML, HTC: <IfModule filter_module> FilterDeclare COMPRESS FilterProvider COMPRESS DEFLATE resp=Content-Type $text/html FilterProvider COMPRESS DEFLATE resp=Content-Type $text/css FilterProvider COMPRESS DEFLATE resp=Content-Type $text/plain FilterProvider COMPRESS DEFLATE resp=Content-Type $text/xml FilterProvider COMPRESS DEFLATE resp=Content-Type $text/x-component FilterProvider COMPRESS DEFLATE resp=Content-Type $application/javascript FilterProvider COMPRESS DEFLATE resp=Content-Type $application/json FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xhtml+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/rss+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/atom+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/vnd.ms-fontobject FilterProvider COMPRESS DEFLATE resp=Content-Type $image/svg+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $image/x-icon FilterProvider COMPRESS DEFLATE resp=Content-Type $application/x-font-ttf FilterProvider COMPRESS DEFLATE resp=Content-Type $font/opentype FilterChain COMPRESS FilterProtocol COMPRESS DEFLATE change=yes;byteranges=no </IfModule> </IfModule> We have a testing machine that runs the same Apache, OS and PHP version. On that machine the compression works just fine on the PHP output. I've checked and compared Apache and PHP config files, all the same as far as I can tell. I've tried several manners of outputting the content of the PHP, using output buffering or just plain echoing the content. Same thing, no compression. Example response headers of a PHP output: HTTP/1.1 200 OK Date: Wed, 25 Apr 2012 23:30:59 GMT Server: Apache Accept-Ranges: bytes Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: public Pragma: no-cache Vary: User-Agent Keep-Alive: timeout=5, max=98 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/html; charset=utf-8 Example of response headers on a css file: HTTP/1.1 200 OK Date: Wed, 25 Apr 2012 23:30:59 GMT Server: Apache Last-Modified: Mon, 04 Jul 2011 19:12:36 GMT Vary: Accept-Encoding,User-Agent Content-Encoding: gzip Cache-Control: public Expires: Fri, 25 May 2012 23:30:59 GMT Content-Length: 714 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/css; charset=utf-8 Does anyone has a clue or experienced the same "problem"? thanks!
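
    The boilerplate block above routes everything through mod_filter's FilterProvider conditions, so one way to isolate the layer at fault is to rule mod_filter out and compress by MIME type with mod_deflate directly; if the PHP output is still chunked and uncompressed after this, the problem is on the PHP side rather than in Apache. A sketch of that fallback:

        # fallback sketch: bypass mod_filter and compress by response MIME type
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript application/json
        </IfModule>

    Another variable worth isolating is PHP itself: setting zlib.output_compression = On in php.ini makes PHP gzip its own output regardless of what Apache's filters do, which quickly shows whether the two servers differ in PHP configuration rather than in Apache.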

    Read the article

  • Excel data range - to sum series within date range

    - by Mark
    I have a set of data that I would like to manipulate but my problem is not straight forward. In this data I have date ranges that include multiple entries of the same date on some days and not on others. What I need to accomplish is to manage a trading account so that no more than 1% of the account is put at risk on any given day (retrospectively). To do this, when a series of trades falls on the same day, I need to total the risk associated with each of those trades so that I can limit the total risk of the combined trades by limiting the position size I take in each. Here is a sample set of the data I am working with. As you can see, there are 5 trades on Jan 3. Each of these trades comes with a risk value. I need to add the risk values of these 5 trades so that I can compare it to an account value and then determine if I should take more than 1 position in each trade. As you can see there are different numbers of trades that occur on the 4th, 5th 6th and 9th. I need the values returned in each row so that I can further manipulate them in the spreadsheet. I am not new to Excel, but cannot come up with a solution here - your input is much appreciated. Forgive the presentation below - I cannot upload a pic (new user) and the format does not carry across from excel. I have aligned the first several lines manually. Thx. Date ............. Pair ....... L/S ...... Initial Risk .......Win ......Loss ....BE. ....Avg Gain Avg Loss pips/swing 1/3/2012 ....EUR/USD ....S .............15 ................1 ..................................10 ..........................15. .. 1/3/2012 ....USD/CHF .....L ............15 ..........................................1 ..........0 1/3/2012 ....AUD/USD ....S .............15 ................1 .................................16 ...........................18 1/3/2012 ....NZD/USD ....S .............15 ................1 ...................................7 .............................8 1/3/2012 ....AUD/JPY .... S .............10 ................1 .................................25 ............................20 1/4/2012 ....EUR/USD ....L .............20 ................1 .................................19 ...........................19 1/4/2012 ....USD/CHF ....S ............ 15 ................1 .................................17 ...........................20 1/4/2012 EUR/JPY L 20 1 0 1/5/2012 EUR/USD L 15 1 10 20 1/5/2012 GBP/USD L 20 1 15 20 1/5/2012 USD/CHF S 15 1 0 1/5/2012 USD/JPY S 10 1 7 10 1/5/2012 USD/CAD S 15 1 28 36 1/5/2012 AUD/USD L 15 1 20 20 1/6/2012 USD/CAD S 15 1 5 -10 1/6/2012 EUR/JPY L 15 1 7 7 1/9/2012 AUD/USD S 15 1 22 30 1/9/2012 NZD/USD S 15 1 10 15
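
    Assuming the layout follows the header row above (dates in column A, Initial Risk in column D, data starting in row 2), a per-row total of same-day risk only needs SUMIF keyed on the date column; the $H$1 cell holding the account value is an assumption for illustration:

        Total risk of all trades sharing the date in A2 (fill down):
            =SUMIF($A$2:$A$100, $A2, $D$2:$D$100)

        Share of a 1% risk budget that day's trades consume, with the account value in $H$1:
            =SUMIF($A$2:$A$100, $A2, $D$2:$D$100) / ($H$1 * 0.01)

    The second figure can then drive the position size per trade, e.g. by dividing each trade's intended size by that ratio whenever it exceeds 1.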

    Read the article

  • Can't Access TFS 2010 Beta 2 from Visual Studio 2010 Beta 2 when domain joined

    - by Brian Sullivan
    I'm experimenting with an installation of TFS 2010 Beta 2 on a virtual machine under VirtualBox running Windows Server 2008. When I've got the server in a workgroup, I can connect to it from Visual Studio just fine, as long as I provide credentials for a local user on the server machine when prompted by the "Connect to Team Foundation Server" dialog. The desktop I'm running Visual Studio on is joined to a domain. However, when I join the server to the domain, I can no longer connect to it from Visual Studio. I get a pretty generic error message: "TF31002 - Unable to connect to team foundation server". It gives me several different possible problems, including an incorrect address or an incorrect username and password. I've already added the domain Windows identity with which I'm logged on the the desktop to the TFS Admins group on the server, so I don't think it's a username/password problem. I've also tried putting the literal IP address of the server in the dialog address box instead of the machine name, but still no dice. I made sure that network discovery was enabled on the server, too, and can navigate to "\\webserver2008" in Windows Explorer without any problems. Shouldn't be a firewall problem, since the TFS install creates the appropriate exceptions in Windows Firewall. It's all a bit confusing, since it seems to work when the server is in a workgroup. Note: I'm a dev, not an admin, so there are many subtleties of server administration with which I'm not familiar. Please make no assumptions about what I may or may not have tried; what may be obvious to you may have never occurred to me. Thanks in advance!

    Read the article

  • IIS 7 - 403 Access Denied error on wwwroot

    - by cparker4486
    Hi, I'm trying to setup a redirect from http://mail.mydomain.com to https://mail.mydomain.com/owa. I've been unsuccessful in doing this by using IIS's HTTP Redirect so I looked to other options. The one I settled on is to create a default document in the wwwroot folder to handle the redirect. I created a file called index.aspx (and added index.aspx to the list of default documents) and put the following code in it: <script runat="server"> private void Page_Load(object sender, System.EventArgs e) { Response.Status = "301 Moved Permanently"; Response.AddHeader("Location","https://mail.mydomain.com/owa"); } </script> Instead of getting a redirect I get: 403 - Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied. I've been trying to find an answer to this but have been unsuccessful so far. One thing I did try was to add the Everyone group to wwwroot with read access. No change. The AppPool for Default Web Site is DefaultAppPool and the Identity is ApplicationPoolIdentity. (I don't know what these things are but maybe knowing this will help you.) Thanks!
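
    For reference, the web.config equivalent of IIS 7's HTTP Redirect feature is a single element; it requires the "HTTP Redirection" role service to be installed, and it avoids routing the request through a default document at all. A sketch using the OWA URL from the question:

        <!-- Sketch: site-level redirect for the site bound to mail.mydomain.com -->
        <configuration>
          <system.webServer>
            <httpRedirect enabled="true"
                          destination="https://mail.mydomain.com/owa"
                          httpResponseStatus="Permanent"
                          exactDestination="true" />
          </system.webServer>
        </configuration>

    If the index.aspx route is kept instead, a 403 on a .aspx default document is often a sign the request never reached the ASP.NET handler, so checking the handler mappings and the default-document entry for the site is a reasonable next step.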

    Read the article

  • pGina with automatic Kerberos ticket and OpenAFS token/ticket

    - by rolands
    I am currently updating our educational Windows lab images from XP to 7, In doing so we are also migrating from Comtarsia to pGina. Unfortunately somewhere in the transition our automation that fetched kerberos and OpenAFS tickets/tokens on login has completely stopped functioning. Basically what used to happen was, using kfw-3.2.2 and the old OpenAFS release (loopback adapter days), either comtarsia would share password or something with the NIM (Network Identity Manager) which would authenticate against the kerberos server gaining a ticket and AFS token needed to access the users file, this was aided by the fact that our ldap database that windows authenticates against is also what kerberos uses to authenticate so usernames/passwords are the same across both services. I have set up all of the tools, albeit newer 64bit versions which seem to have given me less trouble than the previous releases of NIM/OpenAFS/Krb5, as well as setting their configurations back to what we used to use. Unfortunately this seems to be fubar'd in some way, instead all we get now is a OpenAFS token, most likely I assume from the AFScreds tool which operates some kind of integrated login process, although this does not help in getting a kerberos ticket or a afs ticket for which a login box is provided be NIM after the user logs in. Does anyone know IF it is possible to do what we are trying, and if so how? I was considering writing a pGina plugin which would interact with the server itself but this seems slightly like overkill considering that all these applications already exist...

    Read the article

  • What’s Your Tax Strategy? Automate the Tax Transfer Pricing Process!

    - by tobyehatch
    Does your business operate in multiple countries? Well, whether you like it or not, many local and international tax authorities inspect your tax strategy.  Legal, effective tax planning is perceived as a “moral” issue. CEOs are being asked to testify on their process of tax transfer pricing between multinational legal entities.  Marc Seewald, Senior Director of Product Management for EPM Applications specializing in all tax subjects and Product Manager for Oracle Hyperion Tax Provisioning, and Bart Stoehr, Senior Director of Product Strategy for Oracle Hyperion Profitability and Cost Management joined me for a discussion/podcast on this interesting subject.  So what exactly is “tax transfer pricing”? Marc defined it this way. “Tax transfer pricing is a profit allocation methodology required to be used by multinational corporations. Specifically, the ultimate goal of the transfer pricing is to ensure that the global multinational pays their fair share of income tax in each of their local markets. Specifically, it prevents companies from unfairly moving profit from ‘high tax’ countries to ‘low tax’ countries.” According to Marc, in today’s global economy, profitability can be significantly impacted by goods and services exchanged between the related divisions within a single multinational company.  To ensure that these cost allocations are done fairly, there are rules that govern the process. These rules ensure that intercompany allocations fairly represent the actual nature of the businesses activity- as if two divisions were unrelated - and provide a clear audit trail of how the costs have been allocated to prove that allocations fall within reasonable ranges.  What are the repercussions of improper tax transfer pricing? How important is it? Tax transfer pricing allocations can materially impact the amount of overall corporate income taxes paid by a company worldwide, in some cases by hundreds of millions of dollars!  Since so much tax revenue is at stake, revenue agencies like the IRS, and international regulatory bodies like the Organization for Economic Cooperation and Development (OECD) are pushing to reform and clarify reporting for tax transfer pricing. Most recently the OECD announced an “Action Plan for Base Erosion and Profit Shifting”. As Marc explained, the times are changing and companies need to be responsive to this issue. “It feels like every other week there is another company being accused of avoiding taxes,” said Marc. Most recently, Caterpillar was accused of avoiding billions of dollars in taxes. In the last couple of years, Apple, GE, Ikea, and Starbucks, have all been accused of tax avoidance. It’s imperative that companies like these have a clear and auditable tax transfer process that enables them to justify tax transfer pricing allocations and avoid steep penalties and bad publicity. Transparency and efficiency are what is needed when it comes to the tax transfer pricing process. Bart explained that tax transfer pricing is driving a deeper inspection of profit recognition specifically focused on the tax element of profit.  However, allocations needed to support tax profitability are nearly identical in process to allocations taking place in other parts of the finance organization. For example, the methods and processes necessary to arrive at tax profitability by legal entity are no different than those used to arrive at fully loaded profitability for a product line. 
    In fact, there is a great opportunity for alignment across these two different functions. So it seems that tax transfer pricing should be reflected in profitability in general. Bart agreed and told us more about some of the critical sub-processes of an overall tax transfer pricing process within the Oracle solution for tax transfer pricing.  “First, there is a ton of data preparation, enrichment and pre-allocation data analysis that is managed in the Oracle Hyperion solution. This serves as the “data staging” to the next, critical sub-processes.  From here, we leverage the Oracle EPM platform’s ability to re-use dimensions and legal entity driver data and financial data with Oracle Hyperion Profitability and Cost Management (HPCM).  Within HPCM, we manage the driver data, define the legal entity to legal entity allocation rules (like cost plus), and have the option to test out multiple, simultaneous tax transfer pricing what-if scenarios.  Once processed, a tax expert can evaluate the effectiveness of any one scenario result versus another via a variance analysis configured with HPCM’s pre-packaged reporting capability known as Oracle Hyperion SmartView for Office.”   Further, Bart explained that the ability to visibly demonstrate how a cost or revenue has been allocated is really helpful and auditable.  “HPCM’s Traceability Maps are that visual representation of all allocation flows that have been executed and is the tax transfer analyst’s best friend in maintaining clear documentation for tax transfer pricing audits. Simply click and drill as you inspect the chain of allocation definitions and results. Once final, the post-allocated tax data can be compared to the GL to create invoices and journal entries for posting to your GL system of choice.  Of course, there is a framework for overall governance of the journal entries, allocation percentages, and reporting to include necessary approvals.” Lastly, Marc explained that the key value in using the Oracle Hyperion solution for tax transfer pricing is that it keeps everything in alignment in one single place. Specifically, Oracle Hyperion effectively becomes the single book of record for the GAAP, management, and the tax set of books. There are many benefits to having one source of the truth. These include EFFICIENCY, CONTROLS and TRANSPARENCY. So, what’s your tax strategy? Why not automate the tax transfer pricing process! To listen to the entire podcast, click here. To learn more about Oracle Hyperion Profitability and Cost Management (HPCM), click here.

    Read the article

  • Gmail: security warning icon

    - by Notetaker
    Hello, I just enabled some Gmail Labs programs in my Gmail account, and then I noticed the orange triangle icon with an exclamation mark in it at the end of the address bar of my Google Chrome browser. Clicking on it brought forth a "Security Information" dialog box, with the following messages: "--mail.google.com The identity of website has been verified by Thawte SGC CA. --Your connection to mail.google.com is encrypted with 128-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look or behavior of the page." I then logged into two of my other Gmail accounts, one of which has no Gmail Labs programs enabled, and the other with 1 program enabled quite some time ago, both with the same result as above (i.e., with the appearance of the orange triangle warning sign in the address bar). I don't remember seeing the orange triangle before, but I'm not sure if it has ever appeared or not. I have "Always use https" enabled for my Gmail accounts. My questions are: Is there a way to identify and remove these insecure "resources"? (Could enabling Gmail Labs programs have brought these on?) Meanwhile, are my Gmail accounts compromised and unsafe to use? If so, what should I be doing about that now? After this problem is solved, would I need to reset the password to my Gmail accounts, and/or take any other measures to restore their security? Many thanks for answering my questions!

    Read the article

  • Can I change the "From" name and address without going into Mail.app's preferences?

    - by Arjan van Bentem
Can I somehow add an editable "From" field to Mail.app, a bit like Virtual Identity for Thunderbird, just as I can change the "Reply To" address on the fly? In Mail.app, one can set up multiple email addresses for a single account by entering a comma-separated list like [email protected], [email protected]. Then, when composing a message, Mail offers a dropdown to select a "From" address, and when replying†, it automatically selects the right address if it can find a match. Nice, but I'd like to be able to change the "From" on the fly, without going into the Account Information. Also, in previous versions of Mail one could even specify multiple Full Names for a single account: Email Address: Arjan <[email protected]>, Arjan on SU <[email protected]>. But nowadays, Mail only uses the setting of Full Name, and even ignores names in the Email Address field when no value for Full Name is set at all. Hence, it would be great if I could change the Full Name on the fly as well. I have had no luck finding a plugin for Mail yet. † When using sub-addressing, anything that is sent to [email protected] is simply delivered to [email protected]. I'd then like to reply with the same full address, rather than just [email protected] when Mail cannot find a match, without going into the Account Information first. I sometimes also want to compose a new message with a new sub-addressing address.
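Since sub-addressing simply maps a tagged address back onto the base mailbox, the matching that the question wants from Mail.app can be expressed in a few lines. The Python sketch below is illustrative only: the "+" separator and the example address are assumptions, and nothing here hooks into Mail.app itself.

```python
# Illustrative: collapse a sub-addressed recipient to its base mailbox,
# which is the match a mail client would need to pick the right "From" address.
def base_address(address, separator="+"):
    local, _, domain = address.partition("@")
    local = local.split(separator, 1)[0]   # drop "+whatever" from the local part
    return f"{local}@{domain}"

# Hypothetical example address, not taken from the question.
print(base_address("[email protected]"))   # -> [email protected]
```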

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Abr/10)

    - by Claudia Costa
We are pleased to announce the Oracle ECM 10g R3 Implementation boot camp, which will take place on 12-14 April and will cover the topics described below. To help partners develop their skills, Oracle University and Oracle Alliances & Channel have designed this boot camp, condensing the content and thereby reducing costs. Price per participant (3 days): 1,250 EUR + VAT.

Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users, wherever they work in the enterprise.

The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content.

Target Audience: The Oracle ECM Implementation Boot Camp is designed for architects, technical consultants, team/project leaders, and functional consultants of our system integrator partners who want to ramp up on ECM technology.

Contents: The ECM Implementation Boot Camp is a three-day hands-on workshop designed for Oracle Partners who are new to ECM, and it provides implementation instruction on the ECM technology offered by Oracle. The boot camp will:
• Provide hands-on experience in implementing Oracle's truly unified, open, and standards-based ECM technology
• Provide strategic direction on Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development
• Expose a broad set of Oracle's ECM technologies

Objectives: The Oracle ECM Implementation Boot Camp is primarily focused on Oracle's ECM offering for managing and maintaining unstructured content, and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM).

Topics Covered
• Introduction to Oracle UCM
  o UCM Overview
  o UCM Architecture Overview
• Content Server and Document Management basics
  o Installation and Administration Skills
    § User and Security Admin
    § Configuration (metadata, DCLs, profiles, rules, etc.)
    § Workflow Admin
    § System Properties and Component Manager
    § Managing Subscriptions
  o Contributing Content
    § Browser form
    § WebDAV folder
    § Desktop Integration
  o Searching
• Web Content Management
  o Site Studio
• Universal Records Management
• Information Rights Management (IRM)
• Image & Process Management (IPM)
• Oracle Document Capture
• Oracle eMail Archive Service

Labs
• Content Server Installation
• Use and Administration of Content Server
• Introduction to Site Studio
• Use and Administration of Records Manager

Demo: The R&D Group and the New Patent. Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management.

Case Study
Use Case 1: Enable the City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and, eventually, the public Web site.
Use Case 2: Help Acme & Co with its archiving; its goal is to become "paperless" by managing all of the company's business content in a central, Web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.
Agenda:

Day 1
· ECM Overview & Content Server
· ECM Overview
· ECM Architecture and Installation
· UCM and Digital Asset Management DEMO
· Lab 1 - Content Server Installation
· Lab 2 - Use and Administration of Content Server

Day 2
· Web Content Management
· Lab 2 - Use and Administration of Content Server (continued)
· Introduction to Web Content Management
· Lab 3 - Site Studio

Day 3
· URM/IRM/IPM
· Introduction to Universal Records Management
· Lab 4 - URM
· Introduction to Information Rights Management
· Information Rights Management DEMO
· Introduction to Image and Process Management
· Image and Process Management Demo
· Oracle Document Capture
· Oracle eMail Archive

Material needed for the boot camp: Attendees are required to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements:
Hardware
• RAM: 2 GB minimum (1 GB RAM is not enough)
• HDD: 15 GB of free HDD space

Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test that is mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the boot camp.

---------------------------------------------------------------------

For more information or to register, contact: Mónica Pires, 21 423 51 44
Schedule and location: 9:30-12:30 and 14:00-17:00 (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • Staying anonymous while hosting your site?

    - by jamesCroft
I don't mean anonymous surfing; I mean hosting a site and having my own domain and such. The reason is that my blog is about religious/political topics which may cause me trouble in the future. This is the domain I am working on: www.james-croft.com. I know that my name can come up in a WHOIS search: http://www.networksolutions.com/whois-search/james-croft.com. The solution to that, as far as I understand, is to buy a privacy package from the domain registrar; in my case that is Lucky Register: http://i.stack.imgur.com/uvOdc.png. Hosting is also a concern. I use the same hosting service for multiple websites. My question is this: Can my hosting be tracked and used to identify me? Also: are there other methods of finding out my identity, for example through the Google AdSense or Amazon affiliate programs? I couldn't find any relevant articles online. If there is anything else that is relevant, please let me know. I appreciate any response.
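Before paying for a privacy package, it can help to see exactly what a WHOIS query already reveals about a domain. The sketch below simply shells out to the standard whois command-line tool and filters the registrant and admin lines; it assumes the whois CLI is installed, and registrars format their records differently, so the field names shown are only a rough filter.

```python
# Rough check of what a WHOIS lookup exposes for a domain.
# Relies on the standard `whois` CLI being available on the system.
import subprocess

domain = "james-croft.com"   # the domain mentioned in the question
result = subprocess.run(["whois", domain], capture_output=True, text=True)

for line in result.stdout.splitlines():
    # Registrant/Admin lines are where a personal name and address usually appear.
    if line.strip().lower().startswith(("registrant", "admin")):
        print(line.strip())
```

If those lines show a personal name and address, WHOIS privacy at the registrar is the usual remedy; the hosting question is separate, since a host is identified through IP and reverse-DNS records rather than WHOIS registrant data.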

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >