Search Results

Search found 17754 results on 711 pages for 'field description'.

Page 403/711

  • Searching integer sequences

    - by David Gibson
    I have a fairly complex search problem that I've managed to reduce to the following description. I've been googling but haven't been able to find an algorithm that fits my problem cleanly, in particular one that handles the need to skip arbitrary integers. Maybe someone here can point me to something?

    Take a sequence of integers A, for example (1 2 3 4). Take various test sequences of integers and check whether each of them matches A, where a match means: A contains all of the integers in the tested sequence; the integers in the tested sequence appear in the same order in A; we don't care about any integers in A that are not in the test sequence. We want all matching test sequences, not just the first.

    An example: A = (1 2 3 4), B = (1 3), C = (1 3 4), D = (3 1), E = (1 2 5). B matches A. C matches A. D does not match A, as the ordering is different. E does not match A, as it contains an integer not in A. I hope that this explanation is clear enough.

    The best I've managed to do is to construct a tree of the test sequences and iterate over A. The need to be able to skip integers leads to a lot of unsuccessful search paths. Thanks.

    Reading some suggestions, I feel that I have to clarify a couple of points that I left too vague. Repeated numbers are allowed; in fact this is quite important, as it allows a single test sequence to match A in multiple ways. For A = (1 2 3 4 3 5 6) and B = (2 3 6), the matches could be either -23---6 or -2--3-6. I expect there to be a very large number of test sequences, in the thousands at least, and sequence A will tend to have a max length of maybe 20. Thus simply trying to match each test sequence one by one by iterating becomes extremely inefficient. Sorry if this wasn't clear.
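
    One way to enumerate every match (not part of the original post, just an illustrative sketch): index the positions of each value in A once, then walk each test sequence against that index, so repeated values in A naturally produce multiple embeddings. A minimal Python sketch with made-up sample data:

        from bisect import bisect_right
        from collections import defaultdict

        def embeddings(a, test):
            """Enumerate every way 'test' can be embedded as a subsequence of 'a'."""
            index = defaultdict(list)              # value -> sorted positions in a
            for pos, value in enumerate(a):
                index[value].append(pos)

            def extend(i, last_pos, chosen):
                if i == len(test):
                    yield tuple(chosen)
                    return
                positions = index.get(test[i], [])
                # only consider positions strictly after the previously chosen one
                for pos in positions[bisect_right(positions, last_pos):]:
                    yield from extend(i + 1, pos, chosen + [pos])

            yield from extend(0, -1, [])

        A = (1, 2, 3, 4, 3, 5, 6)
        for seq in [(2, 3, 6), (1, 3), (3, 1)]:
            print(seq, list(embeddings(A, seq)))   # (3, 1) yields no embeddings

    With thousands of test sequences sharing prefixes, the same idea extends to walking a trie of the test sequences against the position index instead of matching each sequence independently.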

    Read the article

  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away.

    Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster) to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles close to the near or small end of the frustum and larger ones at the far end, sort of like this (overhead view). (Pardon the gap in the middle - I drew one side and mirrored it.) The triangle sizes are chosen so that all are approximately the same size when projected.

    Then, that mesh would be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain.

    Obviously this doesn't address terrain with overhangs, etc, but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture. The other LoD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?
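
    As a rough illustration of the spacing idea (not from the post; the numbers and the growth-factor parameter are made up): projected size is roughly proportional to world size divided by distance, so keeping the projected triangle size constant means ring spacing that grows geometrically with distance.

        def ring_distances(near, far, growth=0.05):
            """Distances of tessellation rings from the camera such that each ring's
            triangles project to roughly the same screen-space size.
            5% growth per ring covers 1..1000 units in roughly 140 rings."""
            distances = [near]
            while distances[-1] < far:
                distances.append(distances[-1] * (1.0 + growth))
            return distances

        rings = ring_distances(1.0, 1000.0)
        print(len(rings), [round(d, 3) for d in rings[:5]])

    The far rows of the wedge therefore stay cheap, which is the property the post relies on to avoid swapping index buffers.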

    Read the article

  • EBS Customer Relationship Manager (CRM) Product Family Webcasts

    - by user793044
    Oracle's Advisor Webcasts are live presentations given by subject matter experts who deliver knowledge and information about services, products, technologies, best practices and more. Delivered through WebEx, the Oracle Advisor Webcast Program brings interactive expertise straight to your desktop, at no cost. Each session is usually followed by a live Q&A where you can have your questions answered. If you miss any of the live webcasts then you can replay the recording or download the PDF of the presentation. Doc Id 740966.1 gives you access to all the scheduled webcasts as well as the archived recordings and presentations. Just select the product family you are interested in to access the latest webcasts in that area. Below is a listing of the currently scheduled and archived webcasts for the EBS CRM and Industries product family.

    Webcast Topic and Description / Webcast Link / Date and Time:
    - Upcoming: Oracle E-Business Suite - Service: Oracle Service Charges - Introduction/Overview / Register / Dec 6, 2012
    - EBS CRM - Service: R12: How to debug Email Center Auto Service Request Creation Failures / Recording | .pdf / Archived
    - XCALC: Failed Calculations when Using OIC / Recording | .pdf / Archived
    - XPOP: Failed Population When Using Oracle Incentive (August 30, 2012) / Recording | .pdf / Archived
    - XROLL: Failed Roll Up When Using Oracle Incentive Compensation (August 16, 2012) / Recording | .pdf / Archived
    - Common Problems Associated with Product Catalog in Sales / Recording | .pdf / Archived
    - Oracle Incentive Compensation - Troubleshooting Payment Issues / Recording | .pdf / Archived
    - R12 Renewing Service Contracts - Overview / Recording | .pdf / Archived
    - 11i and R12 Oracle CRM Service Basics and Troubleshooting - an Overview / Recording | .pdf / Archived
    - 11i and R12 Transaction Error Troubleshooting Overview / Recording | .pdf / Archived

    Read the article

  • Using Groovy Aggregate Functions in ADF BC

    - by Sireesha Pinninti
    This article explains how Groovy aggregate functions (sum, count, min, max and avg) can be used in ADF Business Components, and demonstrates how to use them at the entity level and the view level. Let's consider the EMP and DEPT tables and a use case to track the number of employees in each department.

    Entity level: To use aggregate functions at the entity level, we need an association between the entities representing the master and child relationship; the destination accessor name is what we use in the Groovy expression. Syntax: <Accessor>.count(Groovyexpression). Note down the destination accessor name (EMP) in the association, or the AccessorAttribute name in the source entity. Add a transient attribute in the source entity with the persistent property set to false and provide the Groovy expression in the syntax above. Finally, add the newly added attribute to the view object.

    View level: To use aggregate functions at the view level, we need a view link between the view objects representing the master and child relationship; the destination accessor name is what we use in the Groovy expression. Syntax: <ViewLinkAccessor>.count(Groovyexpression). Note down the destination accessor name (EmpView) in the view link, or the viewLinkAccessor name in the source view. Add a transient attribute in the view object and provide the Groovy aggregate function count as its value in the syntax above.

    Now, if you run the Application Module tester and execute DeptView / ViewLink, you should see the employee count in the EmpCount field. In a similar way, you can use the other Groovy aggregate functions: sum, avg, min and max.

    Read the article

  • How to link a C++ object to a local variable in Lua

    - by MahanGM
    I'm completing my scripting interface with Lua, but recently I've been stuck at one point. I have several functions for my entity events, like Update(). I have a function called create_entity() which instantiates a new entity from a given entity index:

        function Update()
          local bullet = create_entity(0, 0, "obj_bullet")
        end

    create_entity returns a table which holds the properties of the created entity. Now how can I make a connection between the bullet variable and my newly created object? Right now, for previously added objects in the scene, I simply set a global table for each of them, and then after every call to Update() I go through the registered names to find the object tables and apply new changes. Like the one below:

        function Update()
          if keyboard_key_press(vk_right) then
            obj_player.x += 3
          end
        end

    I can get the obj_player table because I know its name from C++; plus I can get it as a global table and simply reach for the first instance named obj_player. Is there any solution for me to make the bullet variable act like this? I was thinking of getting all local variables in the Update() function and checking each one to see whether it is a table and has a unique field attached to it, like an id; that way I could determine that it is an object table and do the rest of the process. By the way, would this interface be easier to implement with LuaBind?

    Bottom line: how can I make a local variable in Lua that receives a table from the create_entity function, and track that local variable so I can capture it from C++? e.g.

        function Update()
          local bullet = create_entity(0, 0, "obj_bullet")
          bullet.x = 10 -- commit a change in the table
        end

    Now I want to get the variable bullet from C++. And it's not just this variable; there might be a ton of these local variables with different names.

    Read the article

  • How do I fix a garbled screen on a Gateway LT3103u?

    - by paracaudex
    I've been having garbled screen problems on a Gateway LT3103u on Ubuntu for a while. I just did a fresh install of Ubuntu 11.10 and continue to have issues. I installed xubuntu-desktop in case the issues had to do with the sophisticated GNOME graphics. The problem is less severe, but it's still there: after a few minutes of using XFCE, the screen gets garbled. I assume this has something to do with the graphics card, but I don't know how to go about troubleshooting something like this. Where should I start?

    Update: Here is the description of the VGA card from lspci -vvv:

        01:05.0 VGA compatible controller: ATI Technologies Inc RS690M [Radeon X1200 Series] (prog-if 00 [VGA controller])
            Subsystem: Acer Incorporated [ALI] Device 028c
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR- [disabled]
            Capabilities: [50] Power Management version 2
                Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
            Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
            Kernel driver in use: radeon
            Kernel modules: radeon

    Update: Setting GRUB_CMDLINE_LINUX="nomodeset" in /etc/default/grub seems to have fixed it in both Ubuntu and xubuntu-desktop. I will test it for a day or so to see if the problems recur and then post more detail with some links to an explanation.

    Update 2: Is it possible to use this fix for an Nvidia card (GTX 260) when graphics are defective after an 11.10 upgrade/install? For the first few restarts the graphics were OK, then after a few restarts they suddenly became defective and stayed that way. I had to return to 11.04 because of this problem and am waiting for 12.04, so I hope this fix works.

    Read the article

  • Energy Firms Targeted for Sensitive Documents

    - by martin.abrahams
    Numerous multinational energy companies have been targeted by hackers who have been focusing on financial documents related to oil and gas field exploration, bidding contracts, and drilling rights, as well as proprietary industrial process documents, according to a new McAfee report. "It ... speaks to quite a sad state of our critical infrastructure security. These were not sophisticated attacks ... yet they were very successful in achieving their goals," said Dmitri Alperovitch, McAfee's vice president for threat research. Apparently, the attacks can be traced back over several years, creating a sustained security compromise that has provided access to highly sensitive information that is of huge financial value to competitors.

    The value of IRM as an additional layer of protection is clear. Whether your infrastructure security is in a sad state or is state of the art, breaches are always a possibility - and in any case, a lot of sensitive information is shared with third parties whose infrastructure security might not be as good as yours. IRM protects the individual information assets directly so that, even if infrastructure security is compromised, your critical information is encrypted and trackable and only accessible to authenticated, authorised, audited users. The full McAfee report is available here.

    Read the article

  • Changing State in PlayerController from PlayerInput

    - by Jeremy Talus
    In my PlayerInput I want to change the "State" of my PlayerController, but I'm having some trouble doing it. My PlayerInput is declared like this:

        class ResistancePlayerInput extends PlayerInput within ResistancePlayerController config(ResistancePlayerInput);

    and in my PlayerController I have this:

        class ResistancePlayerController extends GamePlayerController;

        var name PreviousState;

        DefaultProperties
        {
            CameraClass = class'ResistanceCamera' // Telling the player controller to use your custom camera script
            InputClass = class'ResistanceGame.ResistancePlayerInput'
            DefaultFOV = 90.f // Telling the player controller what the default field of view (FOV) should be
        }

        simulated event PostBeginPlay()
        {
            Super.PostBeginPlay();
        }

        auto state Walking
        {
            event BeginState(name PreviousStateName)
            {
                Pawn.GroundSpeed = 200;
                `log("Player Walking");
            }
        }

        state Running extends Walking
        {
            event BeginState(name PreviousStateName)
            {
                Pawn.GroundSpeed = 350;
                `log("Player Running");
            }
        }

        state Sprinting extends Walking
        {
            event BeginState(name PreviousStateName)
            {
                Pawn.GroundSpeed = 800;
                `log("Player Sprinting");
            }
        }

    I have tried PCOwner.GotoState() and ResistancePlayerController(PCOwner).GotoState(), but they don't work. I have also tried a simple GotoState, and nothing happens. How can I call GotoState on the PlayerController class from my PlayerInput?

    Read the article

  • Why is a small fixed vocabulary seen as an advantage to RESTful services?

    - by Matt Esch
    So, a RESTful service has a fixed set of verbs in its vocabulary. A RESTful web service takes these from the HTTP methods. There are some supposed advantages to defining a fixed vocabulary, but I don't really grasp the point. Maybe someone can explain it. Why is a fixed vocabulary as outlined by REST better than dynamically defining a vocabulary for each state?

    For example, object-oriented programming is a popular paradigm. RPC is described as defining fixed interfaces, but I don't know why people assume that RPC is limited by these constraints. We could dynamically specify the interface just as a RESTful service dynamically describes its content structure. REST is supposed to be advantageous in that it can grow without extending the vocabulary. RESTful services grow dynamically by adding more resources. What's so wrong about extending a service by dynamically specifying a per-object vocabulary? Why don't we just use the methods that are defined on our objects as the vocabulary and have our services describe to the client what these methods are and whether or not they have side effects?

    Essentially I get the feeling that the description of a server-side resource structure is equivalent to the definition of a vocabulary, but we are then forced to use the limited vocabulary in which to interact with these resources. Does a fixed vocabulary really decouple the concerns of the client from the concerns of the server? I surely have to be concerned with some configuration of the server; this is normally resource location in RESTful services. To complain about the use of a dynamic vocabulary seems unfair, because we have to dynamically reason about how to understand this configuration in some way anyway. A RESTful service describes the transitions you are able to make by identifying object structure through hypermedia. I just don't understand what makes a fixed vocabulary any better than any self-describing dynamic vocabulary, which could easily work very well in an RPC-like service. Is this just poor reasoning for the limited vocabulary of the HTTP protocol?

    Read the article

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    Possible Duplicate: What Forum Software should I use? I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists as a single directory, and where there's a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top, which can be clicked on for the main categories I choose for the forum (nevertheless, users can also add tags to their own threads outside of these default main tags I supply if they wish). This approach, if done properly, is more powerful, efficient, maintenance free, scalable and friendly than a standard forum, so I was hoping someone had the same idea and made something out of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g.: phpBB) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)

    Read the article

  • What to do when you're the interviewer and you don't like your job?

    - by emcb
    I'm in a sorta strange predicament, and I could use some advice. When I was interviewing for my current job, the job description I was given seemed pretty darn nice to me. Without going into the details, the job hasn't quite turned out the way it was advertised. The company is great and takes care of its employees, but for someone who cares about the code they write and the work they do, it's a bad environment - effectively, we operate between 0.5 and 1.0 on the Joel test, and due to political issues we're not going to move beyond that any time soon. Bitter? Maybe. OK... so I'm in the market for a new job. But that's not where my dilemma is.

    The problem that I see coming is that I will be participating in interviewing some candidates for a position on my team, and I'm not sure what to do. I've heard through the grapevine that we have some really solid, promising, fresh-out-of-college prospects coming in to interview, and I honestly dread the thought of somebody having their first experience of engineering in this department. So I'm wondering: what should I do if/when the interviewee asks me
    - "Do you like your job?" (no)
    - "What kind of projects would I be working on?" (mostly static HTML/CSS changes)
    - Anything else that would elicit a negative answer if told truthfully

    Do I tell the truth, to give the candidate a real picture of the job? What if this scares them away, and what if it gets blamed on me? Do I fib or lie, saying we work on exciting projects with lots of flexibility, like the pitch my boss will give when the reality is quite different? Should I feel any kind of moral responsibility to let a promising young developer know that this isn't the job for them, or should I shut up and be loyal 100% to the company? Any approaches or advice is appreciated. I hope I don't come across as overly dramatic - I honestly struggle with this question.

    Read the article

  • Oracle Flashback Technology - Webcast 9th June 2010

    - by Alex Blyth
    Hi All. Here are the details for the webcast on Oracle Flashback Technologies on Wednesday (9th June 2010) beginning at 1.30pm (Sydney, Australia time).

    The Oracle Database architecture leverages the unique technological advances in the area of database recovery due to human errors. Oracle Flashback Technology provides a set of new features to view and rewind data back and forth in time. The Flashback features offer the capability to query historical data, perform change analysis, and perform self-service repair to recover from logical corruptions while the database is online. With Oracle Flashback Technology, you can indeed undo the past!

    Oracle9i introduced Flashback Query to provide a simple, powerful and completely non-disruptive mechanism for recovering from human errors. It allows users to view the state of data at a point in time in the past without requiring any structural changes to the database. Oracle Database 10g extended the Flashback Technology to provide fast and easy recovery at the database, table, row, and transaction level. Flashback Technology revolutionizes recovery by operating just on the changed data. The time it takes to recover the error is now equal to the same amount of time it took to make the mistake. Oracle 10g Flashback Technologies include Flashback Database, Flashback Table, Flashback Drop, Flashback Versions Query, and Flashback Transaction Query. Flashback technology can just as easily be utilized for non-repair purposes, such as historical auditing with Flashback Query and undoing test changes with Flashback Database. Oracle Database 11g introduces an innovative method to manage and query long-term historical data with Flashback Data Archive. This release also provides an easy, one-step transaction backout operation, with the new Flashback Transaction capability.

    Webcast is at http://strtc.oracle.com (IE6, 7 & 8 supported only)
    Conference ID for the webcast is 6690835
    Conference Key: flashback
    Enrollment is required. Please click here to enroll. Please use your real name in the name field (it just makes it easier for us to help you out if we can't answer your questions on the call).
    Audio details: NZ Toll Free - 0800 888 157 or AU Toll Free - 1800420354 (or +61 2 8064 0613)
    Meeting ID: 7914841
    Meeting Passcode: 09062010

    Talk to you all Wednesday 9th June. Alex

    Read the article

  • Precise definition of programming paradigm

    - by Kazark
    Wikipedia defines programming paradigm thus: "a fundamental style of computer programming", which is echoed in the descriptive text of the paradigms tag on this site. I find this a disappointing definition. Anyone who knows the words programming and paradigm could do about that well without knowing anything else about it. There are many styles of computer programming at many levels of abstraction; within any given programming paradigm, multiple styles are possible. For example, Bob Martin says in Clean Code (13):

        Consider this book a description of the Object Mentor School of Clean Code. The techniques and teachings within are the way that we practice our art. We are willing to claim that if you follow these teachings, you will enjoy the benefits that we have enjoyed, and you will learn to write code that is clean and professional. But don't make the mistake of thinking that we are somehow "right" in any absolute sense.

    Thus Bob Martin is not claiming to have the correct style of Object-Oriented programming, even though he, if anyone, might have some claim to doing so. But even within his school of programming, we might have different styles of formatting the code (K&R, etc). There are many styles of programming at many levels. So how can we define programming paradigm rigorously, to distinguish it from other categories of programming styles? Fundamental is somewhat helpful, but not specific. How can we define the phrase in a way that will communicate more than the separate meanings of each of the two words - in other words, how can we define it in a way that will provide additional meaning for someone who speaks English but isn't familiar with a variety of paradigms?

    Read the article

  • invite: WEBLOGIC 12c HANDS-ON WORKSHOP IN PARIS

    - by mseika
    Oracle WebLogic 12c Innovation Workshop
    April 24-26, 2012: Colombes, France

    Workshop Description: Oracle Fusion Middleware is the #1 application infrastructure foundation and WebLogic Server is the #1 Application Server across conventional and cloud environments. It enables enterprises to create and run agile and intelligent business applications and maximize IT efficiency by exploiting modern hardware and software architectures. Do you want to learn more about the innovative features, capabilities and roadmap of WebLogic Server 12c? Then this technical hands-on workshop is for you.

    Agenda Outline: WebLogic introduction; WebLogic topology; WebLogic clustering and high availability; Coherence; troubleshooting; enterprise messaging; development tools & productivity; performance; Exalogic introduction; Enterprise Manager Grid Control; Oracle Public Cloud; Oracle Traffic Director.

    Lab Outline: WebLogic installation & configuration; WebLogic clustering & HA; Coherence use cases & monitoring; WebLogic Active GridLink for RAC integration; Messaging: JMS.

    Audience: WebLogic consultants & architects.

    Prerequisites: Basic knowledge of Java and Java EE; an understanding of the application server concept; basic knowledge of older releases of WebLogic Server would be beneficial.

    Equipment Requirements: This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements: minimum 4GB RAM, 30GB free disk space; Internet Explorer 7 or Firefox 3 or higher; download and install Oracle VM VirtualBox 4.1.8.

    Agenda: This workshop is 3 days. 8:30 am: sign-in and technical set up. 9:00 am: workshop starts. 5:00 pm: workshop ends.

    This workshop is free but space is limited. Register now! Register Here!

    Read the article

  • How does the AlphaBlend BlendState work in XNA 4 when accumulating light into a RenderTarget?

    - by cubrman
    I am using a deferred rendering engine from Catalin Zima's tutorial. His lighting shader returns the color of the light in the RGB channels and the specular component in the alpha channel. Here is how light gets accumulated:

        Game.GraphicsDevice.SetRenderTarget(LightRT);
        Game.GraphicsDevice.Clear(Color.Transparent);
        Game.GraphicsDevice.BlendState = BlendState.AlphaBlend;
        // Continuously draw 3d spheres with the lighting pixel shader.
        ...
        Game.GraphicsDevice.BlendState = BlendState.Opaque;

    MSDN states that the AlphaBlend field of the BlendState class uses the following formula for alpha blending: (source × Blend.SourceAlpha) + (destination × Blend.InvSourceAlpha), where "source" is the color of the pixel returned by the shader and "destination" is the color of the pixel already in the render target.

    My question is: why do my colors accumulate correctly in the light render target even when the new pixels' alphas equal zero? As a quick sanity check I ran the following code in the light's pixel shader:

        float specularLight = 0;
        float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
        if (light4.a == 0) light4 = 0;
        return light4;

    This prevents lighting from being accumulated and, subsequently, drawn on the screen. But when I do the following:

        float specularLight = 0;
        float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
        return light4;

    the light is accumulated and drawn exactly where it needs to be. What am I missing? According to the formula above, (source × 0) + (destination × 1) should equal destination, so the LightRT render target must not change when I draw light spheres into it! It feels like the GPU is using the additive blend instead: (source × Blend.One) + (destination × Blend.One).
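
    A quick numeric illustration of the two blend equations being compared (not from the post; the sample values are arbitrary). Note that XNA 4's built-in BlendState.AlphaBlend is generally documented as a premultiplied-alpha state, i.e. a color source factor of One rather than SourceAlpha, which would explain why light still accumulates when the source alpha is zero - worth verifying against the BlendState members rather than taking this sketch as the answer:

        def blend(src_rgb, dst_rgb, src_factor, dst_factor):
            """Evaluate result = src * src_factor + dst * dst_factor per channel."""
            return tuple(round(s * src_factor + d * dst_factor, 3)
                         for s, d in zip(src_rgb, dst_rgb))

        src, src_a = (0.4, 0.2, 0.1), 0.0   # light colour with zero alpha (specular = 0)
        dst = (0.5, 0.5, 0.5)               # what is already in LightRT

        # Classic alpha blend: source * SrcAlpha + dest * InvSrcAlpha -> dest unchanged
        print(blend(src, dst, src_a, 1.0 - src_a))

        # Premultiplied-alpha blend: source * One + dest * InvSrcAlpha -> light accumulates
        print(blend(src, dst, 1.0, 1.0 - src_a))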

    Read the article

  • What to Return with Async CRUD methods

    - by RualStorge
    While there is a similar question focused on Java, I've been in debates about utilizing Task objects. What's the best way to handle returns from CRUD methods (and similar)? Common returns we've seen over the years are:
    - Void (no return unless there is an exception)
    - Boolean (true on success, false on failure, exception on unhandled failure)
    - Int or GUID (return the newly created object's id; 0 or null on failure, exception on unhandled failure)
    - The updated object (exception on failure)
    - Result object (an object that houses the manipulated object's id, a Boolean or status field indicating success or failure, exception information if there was any, etc.)

    The concern comes into play as we've started moving over to utilizing C# 5's async functionality, and this brought up the question of how we should handle CRUD returns at a large scale. In our systems we have a little of everything in regards to what we return, and we want to make these returns standardized. Now the question is: what is the recommended standard? Is there even a recommended standard yet? (I realize we need to decide our standard, but typically we do so by looking at best practices, seeing if they make sense for us and going from there - here we're not finding much to work with.)
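
    For what it's worth, a sketch of the "result object" option - the thread is about C# and Task, but the shape is language-neutral, and the field names here are only illustrative:

        from dataclasses import dataclass
        from typing import Generic, Optional, TypeVar
        import asyncio

        T = TypeVar("T")

        @dataclass
        class OpResult(Generic[T]):
            """Success flag, the affected entity's id, the entity itself when useful,
            and error details for expected failures instead of exceptions."""
            success: bool
            entity_id: Optional[int] = None
            value: Optional[T] = None
            error: Optional[str] = None

        async def create_user(name: str) -> OpResult[dict]:
            if not name:
                return OpResult(success=False, error="name is required")
            new_id = 42  # pretend the data store generated this id
            return OpResult(success=True, entity_id=new_id, value={"id": new_id, "name": name})

        print(asyncio.run(create_user("Ada")))

    The attraction of this shape for async code is that the awaited value carries both the outcome and the diagnostics, so callers never have to choose between inspecting a bare bool and catching exceptions for expected failures.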

    Read the article

  • Very simple OOP question

    - by Mosty Mostacho
    I was creating and discussing a class diagram with a partner of mine. To simplify things, I've modified the real domain we're working on and made up the following diagram: basically, a company works on constructions that are quite different from each other but are still constructions. Note I've added one field for each class, but there should be many more. Now, I thought this was the way to go, but my partner told me that if new construction classes appear in the future we would have to modify the Company class, which is correct. So the new proposed class diagram would be this.

    Now I've been wondering:
    1. Should the fact that in no place of the application will there be mixed lists of planes and bridges affect the design in any way?
    2. When we have to list only planes for a company, how are we supposed to distinguish them from the other elements in the list without checking their class names?
    3. Related to the previous question, is it correct to assume that this type of diagram should be high-level, and that this is something that shouldn't matter at this stage but rather be thought about and decided at implementation time?

    Any comment will be appreciated.
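
    To make the second question concrete, here is a small sketch (illustrative names only, not from the post) of one reading of the second diagram: the Company holds a list of a common Construction base type, and listing only planes becomes a type-based filter rather than a string comparison on class names.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Construction:
            name: str

        @dataclass
        class Bridge(Construction):
            span_meters: float = 0.0

        @dataclass
        class Plane(Construction):
            wingspan_meters: float = 0.0

        @dataclass
        class Company:
            name: str
            constructions: List[Construction] = field(default_factory=list)

            def planes(self) -> List[Plane]:
                # Company stays unchanged when new Construction subclasses appear;
                # isinstance distinguishes planes without comparing class names.
                return [c for c in self.constructions if isinstance(c, Plane)]

        acme = Company("Acme", [Plane("P-1", 30.0), Bridge("B-1", 250.0)])
        print(acme.planes())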

    Read the article

  • How to Change System Application Pages (Like AccessDenied.aspx, Signout.aspx etc)

    - by Jayant Sharma
    An advantage of SharePoint 2010 over SharePoint 2007 is that we can programmatically change the URLs of system application pages. For example, it was not very easy to change the URL of the AccessDenied.aspx page in SharePoint 2007, but in SharePoint 2010 we can easily change the URL with just a few lines of code. For this purpose we have two methods available:
    - GetMappedPage
    - UpdateMappedPage: returns true if the custom application page is successfully mapped; otherwise, false.

    You can override the following pages (member name - description):
    - None
    - AccessDenied - Specifies AccessDenied.aspx.
    - Confirmation - Specifies Confirmation.aspx.
    - Error - Specifies Error.aspx.
    - Login - Specifies Login.aspx.
    - RequestAccess - Specifies ReqAcc.aspx.
    - Signout - Specifies SignOut.aspx.
    - WebDeleted - Specifies WebDeleted.aspx.
    ( http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spwebapplication.spcustompage.aspx )

    So now it's time for the implementation:

        using (SPSite site = new SPSite("http://testserver01"))
        {
            // Get a reference to the web application.
            SPWebApplication webApp = site.WebApplication;
            webApp.UpdateMappedPage(SPWebApplication.SPCustomPage.AccessDenied, "/_layouts/customPages/CustomAccessDenied.aspx");
            webApp.Update();
        }

    Similarly, you can use SPCustomPage.Confirmation, SPCustomPage.Error, SPCustomPage.Login, SPCustomPage.RequestAccess, SPCustomPage.Signout and SPCustomPage.WebDeleted to override those pages. To reset the mapping, set the target value to null:

        webApp.UpdateMappedPage(SPWebApplication.SPCustomPage.AccessDenied, null);
        webApp.Update();

    Note that you are restricted to locations in the /_layouts folder: when updating the mapped page, the URL has to start with "/_layouts/".

    Ref:
    http://msdn.microsoft.com/en-us/library/gg512103.aspx#bk_spcustapp
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spwebapplication.updatemappedpage.aspx

    Read the article

  • What data to send when tracking clicks with Google Analytics events (and how)?

    - by user359650
    When tracking clicks on links, there are 3 items I'm interested in:
    - link location in the page, by grabbing the id of the closest parent: to see the influence of location on click-through
    - link text: to see the influence of text on click-through
    - link href attribute value: to see where people go when leaving my website

    The problem when using Google Analytics to track those clicks is that events only have 3 available text fields, one of which is the category; if you use the category to store one of the above items you will create a mess in your Event reporting, because you will have as many categories as item values. Therefore, if you assign a predefined value to the category (e.g. clicks), then you're left with only 2 event fields (action, label) to store 3 items (location, text, href). That in itself isn't the end of the world, because you can concatenate 2 items into 1 event field and then use the reporting or the API to filter things out. Accordingly, what I plan on doing is this:

        category: clicks
        action: {location_on_page} ¦ {text}
        label: {href}

    where {__} are variable values related to the clicked links.

    With this I can easily create some reports directly via the GUI:
    - downloads: include only events where label ends with .pdf
    - click-outs to particular domains: include only events where label contains the domain

    And for more complex tasks I need to export the data (or use the API):
    - influence of location on clicks: for each location in the design, count the number of events that have that location in the action, then corroborate with pageviews of the corresponding pages.

    Whilst this looks good I'm wondering if there is a better approach, hence the following questions:
    Q1: Can you foresee any particular issues with this particular setup (e.g. things I won't be able to report on)?
    Q2: Can you think of other data that would be interesting to include in the event?
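
    A minimal sketch (not from the post; the function names and sample values are made up) of how the concatenation scheme above could be packed when firing the event and unpacked again when analysing an export - the actual Google Analytics call itself is omitted:

        def build_event(location_id, link_text, href):
            """Pack the three items of interest into the three GA event text fields."""
            return {
                "category": "clicks",
                "action": f"{location_id} ¦ {link_text}",
                "label": href,
            }

        def split_action(action):
            """Recover location and text from an exported event row."""
            location, _, text = action.partition(" ¦ ")
            return location, text

        event = build_event("sidebar-promo", "Download brochure", "/files/brochure.pdf")
        print(event)
        print(split_action(event["action"]))

    Choosing a separator that cannot appear in element ids or link text (the broken bar above follows the post's own notation) is what keeps the action field reliably splittable later.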

    Read the article

  • UML Diagrams of Multi-Threaded Applications

    - by PersonalNexus
    For single-threaded applications I like to use class diagrams to get an overview of the architecture of that application. This type of diagram, however, hasn't been very helpful when trying to understand heavily multi-threaded/concurrent applications, for instance because different instances of a class "live" on different threads (meaning accessing an instance is safe only from the one thread it lives on). Consequently, associations between classes don't necessarily mean that I can call methods on those objects; instead I have to make that call on the target object's thread. Most literature I have dug up on the topic, such as Designing Concurrent, Distributed, and Real-Time Applications with UML by Hassan Gomaa, had some nice ideas, such as drawing thread boundaries into object diagrams, but overall seemed a bit too academic and wordy to be really useful. I don't want to use these diagrams as a high-level view of the problem domain, but rather as a detailed description of my classes/objects, their interactions and the limitations due to the thread boundaries I mentioned above.

    I would therefore like to know:
    - What types of diagrams have you found to be most helpful in understanding multi-threaded applications?
    - Are there any extensions to classic UML that take into account the peculiarities of multi-threaded applications, e.g. through annotations illustrating that some objects might live in a certain thread while others have no thread affinity; that some fields of an object may be read from any thread but written to only from one; or that some methods are synchronous and return a result while others are asynchronous, queueing up requests and returning results, for instance, via a callback on a different thread?

    Read the article

  • sp_ssiscatalog v1.0.1.0 now available for download

    - by jamiet
    13 days ago I wrote a blog post entitled Introducing sp_ssiscatalog (v1.0.0.0) in which I first made mention of sp_ssiscatalog, an open source stored procedure intended to make it easy to query the SSIS Catalog. I have been working on some enhancements since then and hence v1.0.1.0 is now available for download from Codeplex.

    What's new in this release. This release includes the following enhancements:
    - [execution_id] now gets returned in a call to EXEC [dbo].[sp_ssiscatalog] @operation_type='exec';
    - Events can be filtered by specifying packages to ignore: EXEC [dbo].[sp_ssiscatalog] @operation_type='exec',@exec_events_packagesexcluded='SomePackage.dtsx,AnotherPackage.dtsx';
    - [event_message_id] is now returned in a list of events
    - The list of executions can now be filtered via a minimum and maximum execution_id: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_minimum_execution_id=198,@execs_maximum_execution_id=201
    - Events resultsets now have a field, [event_message_context_xml], that contains an XML document with all [event_message_context] info (if any exists)

    Installation instructions:
    1. Download the zip file at DB v1.0.1.0. It contains two files, SsisReportingPack.dacpac & SSISDB.dacpac.
    2. Unzip to a folder of your choosing.
    3. Open a command prompt and change to the directory into which you unzipped the files.
    4. Execute: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) (/tsn specifies the target server; change as appropriate.)

    If everything works OK you'll see confirmation output from sqlpackage (it differs slightly depending on whether the target database already exists or not). This will create a database called [SsisReportingPack] which contains [dbo].[sp_ssiscatalog].

    Feedback is welcomed! @Jamiet

    Read the article

  • Internal Mic not working on Dell Adamo 13

    - by AFD
    So I'm using Elementary Luna Beta (based on Ubuntu 12.04 LTS) on a spare HDD after successfully using the Jupiter release (based on 10.10) for many years. In Jupiter I had my internal mic work out of the box, and with Skype installed from a deb it was all set up without having to step foot inside the terminal. In Luna Beta the internal mic is not recognised by the OS and so is also not recognised by Skype. I believe the hardware is this (from sudo lshw):

        *-multimedia
            description: Audio device
            product: 82801I (ICH9 Family) HD Audio Controller
            vendor: Intel Corporation
            physical id: 1b
            bus info: pci@0000:00:1b.0
            version: 03
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list
            configuration: driver=HDA Intel latency=0
            resources: irq:46 memory:f8600000-f8603fff

    As well as this I ran cat /proc/asound/card0/codec* | grep Codec, which gave me:

        Codec: IDT 92HD73C1X5
        Codec: Intel Cantiga HDMI

    How do I tweak Luna to get this hardware working properly? I'm able to switch out my Luna HDD with the Jupiter HDD to help troubleshoot what the differences are between the two and why the more recent OS can't find / use the mic correctly. Thanks in advance for any help you can give.

    Read the article

  • Direct IO enhancements in OVM Server for SPARC 2.2 (a.k.a. LDoms 2.2)

    - by user12611315
    The Direct I/O feature has been available to LDoms customers since LDoms 2.0. Apart from the latest SR-IOV feature in LDoms 2.2, it is worth noting a few enhancements to the Direct I/O feature. These are:

    Support for Metis-Q and Metis-E cards. Support for these cards is highly requested, and they are worth mentioning because they are the only combo cards containing both Fibre Channel and Ethernet in the same card. With this support, a customer can have both SAN storage and network access with just one card and one PCIe slot assigned to a logical domain. This reduces cost and helps when a given platform has fewer slots. The following are the part numbers for these cards. I have tried to note the platforms on which each card is supported, but this information can quickly become outdated; the accurate information can be found in the Support Document.

    Card Name / Part Number / Platforms:
    - Metis-Q: StorageTek Dual 8Gb Fibre Channel Dual GbE ExpressModule HBA, QLogic / SG-XPCIEFCGBE-Q8-N / SPARC T3-4, T4-4
    - Metis-E: StorageTek Dual 8Gb Fibre Channel Dual GbE ExpressModule HBA, Emulex / SG-XPCIEFCGBE-E8-N / SPARC T3-4, T4-4

    Additional cards added to the portfolio of supported cards. These are mainly Powerville-based Ethernet cards; the part numbers for these cards are as below:

    Part Number / Description:
    - 7100477 / Sun Quad Port GbE PCI Express 2.0 Low Profile Adapter, UTP
    - 7100481 / Sun Dual Port GbE PCI Express 2.0 Low Profile Adapter, MMF
    - 7100483 / Sun Quad Port GbE PCI Express 2.0 ExpressModule, UTP
    - 7110486 / Sun Quad Port GbE PCI Express 2.0 ExpressModule, MMF

    Note: the Direct I/O feature has a hard dependency on the root domain (the PCIe bus owner, here the primary domain). That is, rebooting the root domain for any reason may impact the logical domains that have PCIe slots assigned with the Direct I/O feature, so rebooting a root domain needs to be carefully managed. Also apply the failure-policy settings as described in the admin guide and release notes to deal with unexpected cases.

    Read the article

  • "Siebel2FusionCRM Integration" solution by ec4u (D)

    - by Richard Lefebvre
    ec4u, a CRM system integration leader based in Germany and Switzerland, and a historical Oracle/Siebel partner, offers a complete "Siebel2FusionCRM Integration" solution based on tools, methodology and services.

    ec4u's Siebel2FusionCRM Integration solution's main objectives are:
    - Integration between Siebel (on-premise) and Fusion CRM / Marketing ("in the cloud")
    - Accounts, Contacts and Addresses are maintained by Sales in Siebel CRM and synchronized in real-time into Fusion CRM / Marketing
    - CDM processing ensures clean data for marketing campaigns (validation and deduplication)
    - Create e-mail marketing campaigns and newsletters in Fusion

    The solution features:
    - Upsert processes figure out what information needs to be updated, inserted or terminated (deleted). However, as Siebel is the data master, it is still a one-way synchronization.
    - Handle deleted or nullified information by terminating it in Fusion CRM (set start and end date to define the validity period)
    - Initial load and real-time synchronization use the same processes
    - Invocations/operations can be repeated due to no transactional support from Fusion web services
    - Tagging sub-entries in case of 1-to-N mapping (example: a telephone number is one simple field in Siebel, but in Fusion you can have multiple telephone numbers in a sub table)
    - E-mail notification in case of any error (containing error message, instance number, detailed payload)
    - Schematron validation

    Interested? Looking for more details or a partnership with ec4u for a "Siebel2FusionCRM Integration" project? Contact: Gregor Bublitz, Director Expert Services ([email protected])

    Read the article

  • Infrastructure to effectively set up experiments and learn from them

    - by David
    Open-org.com is in the early stages of creating our first product, a place on the web where one can ask lawyers questions at a fraction of their normal costs. An early-stage front page can be found here. I got inspired by this video, which is recommended by Jeff Atwood and which talks about getting feedback faster, which is the reason for this question.

    The problem: Needless to say, we want our conversion rates to be as high as possible. Therefore, we want to be able to rapidly set up a new experiment where we change something on the site (like moving an image slightly, rewriting a sentence, etc.). We then want to present the modified page to a random subset of the users. After that we will compare the conversion rates of the experiment with another version. I could very well imagine that we want to run 10-100 experiments simultaneously, and it would be nice to have features where experiments that obviously have worse results are ended ahead of schedule.

    My question: Does infrastructure to support the whole process exist? A short description of our infrastructure: we use EC2 and PHP and have a script to automatically start up new instances with all needed software. Still, starting up a new server for every experiment seems like a bit of overkill, so I am wondering what other options exist.

    Btw. if you feel like working for Open-org.com, you can pick a task and start working, or suggest a new task. All profits are given out to the contributors.
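
    Whatever tooling ends up being used, the core mechanic behind "present the modified page to a random subset of the users" is usually deterministic bucketing. A minimal sketch (illustrative only, not tied to the EC2/PHP setup described above):

        import hashlib

        def assign_variant(user_id, experiment_name, variants=("control", "treatment")):
            """Deterministically bucket a user into a variant for one experiment.

            Hashing user_id together with the experiment name keeps assignments
            stable across visits and independent across experiments, which is what
            lets 10-100 experiments run simultaneously on the same traffic."""
            digest = hashlib.sha1(f"{experiment_name}:{user_id}".encode()).hexdigest()
            bucket = int(digest, 16) % len(variants)
            return variants[bucket]

        # The same user always lands in the same bucket per experiment:
        print(assign_variant("user-123", "front-page-headline"))
        print(assign_variant("user-123", "signup-button-color"))

    Conversion events can then be logged together with the variant name, and an experiment that is obviously losing can simply be removed from the active list.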

    Read the article
