Search Results

Search found 8344 results on 334 pages for 'count'.


  • Alternative to NV Occlusion Query - getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implementing similar behaviour in the fragment shader? Some of my ideas:

    1. Render the object completely in white, then count all the colors together using a two-pass shader: first a vertical line is rendered, and for each of its fragments the shader computes the sum over the whole row; then a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient.

    2. Render the object completely in white over a black background. Downsample recursively, abusing the hardware's linear interpolation between texels, until reaching a reasonably small resolution. This leads to fragments whose greyscale level depends on the number of white pixels in their corresponding region. Is this even accurate enough?

    ... ?
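
    For intuition on the second idea: box-filtered downsampling preserves the mean, so the final 1x1 value times the original pixel count recovers the white-fragment count exactly in real arithmetic; the accuracy question is purely about render-target precision (an 8-bit target quantizes every intermediate average to steps of 1/255). A small CPU-side sketch of that arithmetic (plain C#, no GPU involved, all names illustrative):

        // Recursively box-downsample a square power-of-two image of 0/1 coverage
        // values and recover the white-pixel count from the final average.
        // This mirrors the GPU idea in exact float arithmetic; a real 8-bit
        // render target would quantize each intermediate average.
        using System;

        class DownsampleCount
        {
            static double[,] Downsample(double[,] img)
            {
                int n = img.GetLength(0) / 2;
                var result = new double[n, n];
                for (int y = 0; y < n; y++)
                    for (int x = 0; x < n; x++)
                        // 2x2 box filter: average of four source pixels
                        result[y, x] = (img[2 * y, 2 * x] + img[2 * y, 2 * x + 1] +
                                        img[2 * y + 1, 2 * x] + img[2 * y + 1, 2 * x + 1]) / 4.0;
                return result;
            }

            static void Main()
            {
                const int size = 8;
                var img = new double[size, size];
                img[1, 2] = img[3, 3] = img[7, 0] = 1.0;   // three "white" fragments

                while (img.GetLength(0) > 1)
                    img = Downsample(img);

                // final average * total pixel count == number of white pixels
                Console.WriteLine(img[0, 0] * size * size); // prints 3
            }
        }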

    Read the article

  • My GLSL shader isn't compiling even though it should. What should I investigate?

    - by reapz
    I'm porting an iOS game to Android. One of the shaders I'm using wouldn't compile until I reduced the number of uniform variables. Here are the uniform definitions:

        uniform highp   mat4 ViewProjMatrix;
        uniform mediump vec3 LightDirWorld;
        uniform mediump int  BoneCount;
        uniform highp   mat4 BoneMatrixArray[8];
        uniform highp   mat3 BoneMatrixArrayIT[8];
        uniform mediump int  LightCount;
        uniform mediump vec3 LightPos[4];                // used to be 12, now 4; next lines likewise
        uniform lowp    vec3 LightColour[4];
        uniform mediump vec3 LightInnerOuterFalloff[4];

    My issue is that the shader wouldn't compile until I reduced those arrays from 12 entries to 4. My understanding is that with arrays of 12 I would be using 56 vertex uniform vectors. I query the system at startup (GL_MAX_VERTEX_UNIFORM_VECTORS) and it says that 128 are available. Why wouldn't it compile with 56? I'm having issues on the Kindle Fire.
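
    As a sanity check on that figure, here is one common accounting, assuming the usual packing where a mat4 occupies 4 vec4 registers, a mat3 occupies 3, and each int or vec3 occupies 1 (drivers are allowed to pack less tightly, e.g. padding mat3 rows to full vec4s or reserving registers for built-ins):

        ViewProjMatrix               4
        LightDirWorld                1
        BoneCount                    1
        BoneMatrixArray[8]          32
        BoneMatrixArrayIT[8]        24
        LightCount                   1
        LightPos                     4   (12 with size-12 arrays)
        LightColour                  4   (12)
        LightInnerOuterFalloff       4   (12)
        ------------------------------
        total                       75   (99)

    Under this accounting the size-12 version needs 99 vectors rather than 56, which is still under the advertised 128; a driver that pads or reserves registers could plausibly push the real usage past the limit on a device like the Kindle Fire.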

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC).

        var wins = xs
            .HoppingWindow(TimeSpan.FromSeconds(5),
                           TimeSpan.FromSeconds(2),
                           new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc));

    Logically, there is a window with start time a + np and end time a + np + d for every integer n. That's a lot of windows. So why doesn't the following query (always) blow up?

        var query = wins.Select(win => win.Count());

    A few users have asked why StreamInsight doesn't produce output for empty windows. Primarily it's because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate "the end of time" and DateTimeOffset.MinValue to approximate "the beginning of time", so the number of windows is lower in practice.)

    That was the good news. Now the bad news. Events also have duration. Consider the following simple input:

        var xs = this.Application
            .DefineEnumerable(() => new[]
                { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })
            .ToStreamable(AdvanceTimeSettings.IncreasingStartTime);

    Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn't suffer from this shortcoming.

    The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows ("Windows") are assigned to six intervals ("Assignments") in the timeline. Next we take input events ("Events") and alter their lifetimes ("Altered Events") so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2, so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows ("Snapshots") to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher.

    Figure 1: Timeline

    The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function, which rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n, will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks:

        static long SnapToBoundary(long t, long a, long p)
        {
            return t - ((t - a) % p) - (t >= a ? 0L : p);
        }

    How do we find the earliest window including the start time for an event? It's the window following the last window that does not include the start time, assuming that there are no gaps in the windows (i.e. duration >= interval), a limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends:

        long e = a + (d % p);

    Using the window end alignment, we are finally ready to describe the start time selector:

        static long AdjustStartTime(long t, long e, long p)
        {
            return SnapToBoundary(t, e, p) + p;
        }

    To find the latest window including the end time for an event, we look for the last window start time (non-inclusive):

        public static long AdjustEndTime(long t, long a, long d, long p)
        {
            return SnapToBoundary(t - 1, a, p) + p + d;
        }

    Bringing it together, we can define the translation from events to 'altered events' as in Figure 1:

        public static IQStreamable<T> SnapToWindowIntervals<T>(
            IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment)
        {
            if (source == null) throw new ArgumentNullException("source");

            // reason about DateTime and TimeSpan in ticks
            long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);
            long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));

            // set alignment to earliest possible window
            var a = alignment.ToUniversalTime().Ticks % p;

            // verify constraints of this solution
            if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }
            if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }

            // find the alignment of window ends
            long e = a + (d % p);

            return source.AlterEventLifetime(
                evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),
                evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -
                    ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)));
        }

        public static DateTime ToDateTime(long ticks)
        {
            // just snap to min or max value rather than under/overflowing
            return ticks < DateTime.MinValue.Ticks
                ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)
                : ticks > DateTime.MaxValue.Ticks
                ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)
                : new DateTime(ticks, DateTimeKind.Utc);
        }

    Finally, we can describe our custom hopping window operator:

        public static IQWindowedStreamable<T> HoppingWindow2<T>(
            IQStreamable<T> source,
            TimeSpan duration,
            TimeSpan interval,
            DateTime alignment)
        {
            if (source == null) { throw new ArgumentNullException("source"); }
            return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow();
        }

    By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing!

        public void Main()
        {
            var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);
            var duration = TimeSpan.FromSeconds(5);
            var interval = TimeSpan.FromSeconds(2);
            var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);

            var events = this.Application.DefineEnumerable(() => new[]
            {
                EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),
                EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),
                EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),
                EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),
                EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),
                EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),
                EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),
            }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);

            var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);
            var query = from win in HoppingWindow2(events, duration, interval, alignment)
                        select win.Count();

            DisplayResults(adjustedEvents, "Adjusted Events");
            DisplayResults(query, "Query");
        }

    As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time:

        Adjusted Events
        StartTime                 EndTime                   Payload
        6/28/2012 12:00:01 AM     12/31/9999 11:59:59 PM    e0
        6/28/2012 12:00:03 AM     6/28/2012 12:00:07 AM     e1
        6/28/2012 12:00:05 AM     6/28/2012 12:00:15 AM     e2
        6/28/2012 12:00:11 AM     6/28/2012 12:00:15 AM     e3

        Query
        StartTime                 EndTime                   Payload
        6/28/2012 12:00:01 AM     6/28/2012 12:00:03 AM     1
        6/28/2012 12:00:03 AM     6/28/2012 12:00:05 AM     2
        6/28/2012 12:00:05 AM     6/28/2012 12:00:07 AM     3
        6/28/2012 12:00:07 AM     6/28/2012 12:00:11 AM     2
        6/28/2012 12:00:11 AM     6/28/2012 12:00:15 AM     3
        6/28/2012 12:00:15 AM     12/31/9999 11:59:59 PM    1

    Regards,
    The StreamInsight Team

    Read the article

  • Is realtime validation of username good or bad?

    - by iamserious
    I have a simple form for the user to sign up to my site, with email, username and password fields. We are now trying to implement ajax validation so the user doesn't have to post the form to find out if the username is already taken. I can do this either on the keyup event or on the blur event. My question is: which of these is really the best approach?

    Keyup: From the user's POV it would be good if the validation is done as they type (on keyup). Of course, I wait half a second to see if the user stops typing before firing off the request, and the user can make any adjustments immediately. But this means I am sending far more requests than if I validated the username on blur.

    Blur: The number of requests will be much lower when the validation is done on blur. But this means the user has to actually leave the textbox, look at the validation result, and if necessary go back to the textbox to make changes, repeating the whole process until they get it right.

    I had a quick look at Google, Tumblr and Twitter, and no one actually does username validation on keyup (heck, Tumblr waits for the form to be posted), but I could swear I have seen keyup validation in a lot of places too. So, coming back to the question: will keyup validations be too much for the server, an unnecessary overhead? Or is it worth taking these hits to give the user a better experience?

    ps: all my regex validations etc. are already done in javascript, and only when input passes all the other criteria does it send a request to the server to check if the username already exists. (And the server is doing a select count(1) from user where username = '' - nothing substantial, but still enough to occupy some resource.)

    pps: I'm on an asp.net, MS SQL stack, if that matters.
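
    Whichever event triggers the request, the server side looks the same; here is a minimal sketch of what such an availability endpoint might look like in ASP.NET MVC, with hypothetical names throughout (CheckUsername, a "Default" connection string, a [user] table) - only the count(1) query comes from the question above:

        // Hypothetical availability endpoint; names and routing are illustrative.
        using System.Configuration;
        using System.Data.SqlClient;
        using System.Web.Mvc;

        public class AccountController : Controller
        {
            [HttpGet]
            public JsonResult CheckUsername(string username)
            {
                var cs = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
                using (var conn = new SqlConnection(cs))
                using (var cmd = new SqlCommand(
                    "select count(1) from [user] where username = @name", conn))
                {
                    // parameterized so user input never lands in the SQL text
                    cmd.Parameters.AddWithValue("@name", username);
                    conn.Open();
                    bool taken = (int)cmd.ExecuteScalar() > 0;
                    return Json(new { available = !taken }, JsonRequestBehavior.AllowGet);
                }
            }
        }

    An index on the username column keeps the count(1) cheap even at keyup-rate traffic.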

    Read the article

  • Converting a DrawModel() using BasicEffect to one using Effect

    - by Fibericon
    Take this DrawModel() provided by MSDN:

        private void DrawModel(Model m)
        {
            Matrix[] transforms = new Matrix[m.Bones.Count];
            // cast so this isn't integer division
            float aspectRatio = (float)graphics.GraphicsDevice.Viewport.Width /
                graphics.GraphicsDevice.Viewport.Height;
            m.CopyAbsoluteBoneTransformsTo(transforms);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
            Matrix view = Matrix.CreateLookAt(
                new Vector3(0.0f, 50.0f, Zoom), Vector3.Zero, Vector3.Up);

            foreach (ModelMesh mesh in m.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.View = view;
                    effect.Projection = projection;
                    effect.World = gameWorldRotation *
                        transforms[mesh.ParentBone.Index] *
                        Matrix.CreateTranslation(Position);
                }
                mesh.Draw();
            }
        }

    How would I apply a custom effect to a model with that? Effect doesn't have View, Projection, or World members. This is what they recommend replacing the foreach loop with:

        foreach (ModelMesh mesh in terrain.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                mesh.Draw();
            }
        }

    Of course, that doesn't really work. What else needs to be done?
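
    One answer, sketched against the DrawModel above: a custom Effect exposes its shader globals through its Parameters collection, under whatever names the .fx file declares (World/View/Projection here are an assumed convention, not a given), and the custom effect must be attached to each mesh part before drawing:

        // Sketch: drive a custom Effect through its parameter collection.
        // Assumes the .fx declares float4x4 World, View, Projection; substitute
        // the names your shader actually uses. Reuses m, transforms, view,
        // projection, gameWorldRotation and Position from DrawModel above;
        // customEffect is a hypothetical Effect loaded elsewhere.
        foreach (ModelMesh mesh in m.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = customEffect;   // attach once, e.g. at load time
            }
            foreach (Effect effect in mesh.Effects)
            {
                effect.Parameters["World"].SetValue(
                    gameWorldRotation * transforms[mesh.ParentBone.Index] *
                    Matrix.CreateTranslation(Position));
                effect.Parameters["View"].SetValue(view);
                effect.Parameters["Projection"].SetValue(projection);
            }
            mesh.Draw();
        }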

    Read the article

  • XNA model drawing problem

    - by user1990950
    When using this code:

        public static void DrawModel(Model model, Vector3 position, Vector3 offset,
            float xRotation, float yRotation, float zRotation, float allrot,
            float xScale, float yScale, float zScale)
        {
            position.Y *= -1;
            offset.Y *= -1;
            Matrix worldMatrix =
                ((Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) *
                  Matrix.CreateRotationX(MathHelper.ToRadians(xRotation))) *
                 Matrix.CreateRotationY(MathHelper.ToRadians(yRotation))) *
                (Matrix.CreateTranslation(offset) *
                 Matrix.CreateRotationY(MathHelper.ToRadians(allrot))) *
                Matrix.CreateScale(xScale, yScale, zScale);
            worldMatrix *= Matrix.CreateTranslation(position) *
                theCamera.GetTransformation() *
                Matrix.CreateTranslation(new Vector3(
                    -(graphics.GraphicsDevice.Viewport.Width / 2),
                    graphics.GraphicsDevice.Viewport.Height / 2, 0));

            foreach (ModelMesh mesh in model.Meshes)
            {
                for (int i = 0; i < mesh.Effects.Count; i++)
                {
                    ((BasicEffect)mesh.Effects[i]).EnableDefaultLighting();
                    ((BasicEffect)mesh.Effects[i]).World = worldMatrix;
                    ((BasicEffect)mesh.Effects[i]).View = viewMatrix;
                    ((BasicEffect)mesh.Effects[i]).Projection = projectionMatrix;
                }
                mesh.Draw();
            }
        }

    the model rotates and then scales. It should scale and then rotate, but whenever I try to change it, it won't work.
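
    XNA matrices act on row vectors, so in a product the leftmost factor is applied first; in the code above, Matrix.CreateScale sits on the far right and is therefore applied last. One sketch of the reordering, assuming the offset translation itself should not be scaled:

        // Row-vector convention: v' = v * M, so left factors apply first.
        // Scale first, then the local rotations, then offset and orbit as before.
        Matrix worldMatrix =
            Matrix.CreateScale(xScale, yScale, zScale) *
            ((Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) *
              Matrix.CreateRotationX(MathHelper.ToRadians(xRotation))) *
             Matrix.CreateRotationY(MathHelper.ToRadians(yRotation))) *
            (Matrix.CreateTranslation(offset) *
             Matrix.CreateRotationY(MathHelper.ToRadians(allrot)));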

    Read the article

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with a method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library being used (opengl/directx).

    First question: the model should know nothing about the renderer implementation, but in that case where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model
        {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?

    Second question: what is the best way for a model to tell the renderer HOW it must be rendered (for example with a texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    with the Renderer::DrawModel method checking each flag. But it looks like that will become unwieldy as the number of options grows...
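
    One common answer to the first question is to keep the API-specific data on the renderer's side, cached per model, so the model type never learns about the graphics library. A minimal sketch of that shape, written in C# with all names illustrative:

        // The model stays API-agnostic; the renderer caches its own
        // API-specific buffers keyed by model instance.
        using System.Collections.Generic;

        public class Model
        {
            public float[] GetVertices() { return new float[0]; } // stand-in
        }

        public interface IRenderer
        {
            void DrawModel(Model model);
        }

        public class Dx11Renderer : IRenderer
        {
            private sealed class GpuBuffer { /* would wrap the DX11 vertex buffer */ }

            // cache lives in the renderer, not the model
            private readonly Dictionary<Model, GpuBuffer> cache =
                new Dictionary<Model, GpuBuffer>();

            public void DrawModel(Model model)
            {
                GpuBuffer vb;
                if (!cache.TryGetValue(model, out vb))
                {
                    vb = Upload(model.GetVertices()); // create once, reuse on later frames
                    cache.Add(model, vb);
                }
                // ... bind vb and issue the draw call ...
            }

            private GpuBuffer Upload(float[] vertices) { return new GpuBuffer(); }
        }

    For the second question, a common alternative to bit flags is to give each model a material object (texture, shading options) that the renderer interprets, so adding an option extends a type instead of a flag enum and a chain of flag checks.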

    Read the article

  • Why is purchasing Microsoft licences such a daunting task? [closed]

    - by John Nevermore
    I've spent 2 frustrating days jumping through hoops and browsing through different local e-shops for VS (Visual Studio) 2010 Pro and WHS (Windows Home Server) 2011 FPP licenses. I found jack; or, to be more precise, the closest I found in my country was WHS 2011 OEM licenses, after multiple emails sent to individuals found on the Microsoft partners page. The question being: why is it so difficult to get your hands on Microsoft licenses as an individual? Sure, you can get the latest end-user operating systems from most shops, but when it comes to development tools or server software you are left dry. And companies that do sell licenses most of the time don't even put up pricing or a self-service environment for buying them; you need a hawk's eye for that shiny little Microsoft partner logo, and you spam through a bunch of emails not knowing if you can count on them to get the license or not. Sure, I could whip out my credit card and buy the VS 2010 license in the online Microsoft Shop. Well whippideegoddamndoo, they sell that, but they don't sell WHS 2011 licenses. Why does a company make it so hard to buy its products? Let's not even talk about the licensing itself being a pain.

    Read the article

  • Flow Fields in Dynamics NAV

    - by T
    I don't know the exact business reason, but someone asked how to identify all sales shipping orders with any negative quantities on them. They needed to print these separately from the shipments that have only positive quantities. Here is one way to solve this problem:

    1. In NAV, open the Sales Shipment Header table.
    2. Add a field of type Boolean. (The field type should match the value you expect the flow field method to return. In this case we will use an Exist, so we expect a Boolean, but we could have used Sum, Average, Min, Max, Count, or a Lookup to return a value.)
    3. Define the field as a FlowField and click the ellipsis next to CalcFormula.

    Now [Has Negative Qty] is false if no negatives exist and true if a negative exists, so it is ready to be used as a filter on the report. Don't forget that if you are using it in code, you may need to CalcFields. Hope that helps. If you really can't afford the field in your header, you can use code to check the lines for a negative value each time, and use a skip or break function to skip that header record, but it seems expensive to check them all if you only want a few to print. Please let me know if you think of a better solution.

    Read the article

  • Dependencies not met on 12.04?

    - by Mochan
    Now I'm very aware that there are many questions out there quite similar to this one, but I have looked through many of them and have not found a suitable answer. You are welcome to suggest similar questions, but I doubt they will help. Getting to the issue at hand: whenever I do anything that involves installation, whether it be codecs for videos, new programs or whatever else, I always get the 'Dependencies not met' error. In addition, I get a notification in the panel (screenshot not shown) whose menu says: "An error occurred. Please run Package Manager from the right-click menu or run apt-get in a terminal to see what is wrong. The error message was: 'Error: Broken Count 0'. This usually means your installed packages have unmet dependencies." It gives me three items to click: Show Updates, Install all updates, Check for Updates, and then finally Show Notifications (with a tick) and Preferences. When I try 'Install all updates' (likewise 'Check for Updates' and then installing), it fails with errors (screenshots not shown), as well as 'Ubuntu has experienced an internal error' and 'Did this error occur when moving from one version of Ubuntu to another?' (I clicked NO, because it didn't). So I took its advice and ran sudo apt-get install -f. This is what results:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libapt-pkg4.12:i386
        The following packages will be upgraded:
          libapt-pkg4.12:i386
        1 upgraded, 0 newly installed, 0 to remove and 87 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/941 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        E: Internal Error, No file name for libapt-pkg4.12

    Running sudo apt-get update works fine, but running sudo apt-get install -f still results in the same thing. I really have no idea what to do... can anyone help me?

    Read the article

  • How to debug lack of sound in Asus EEE PC

    - by Kalmar
    I have an Asus EEE PC 1225B with a fresh Lubuntu 12.04, and no sound. It doesn't seem to be a common problem, so I have to do some research into what's up. I tried running alsamixer, so I know I have a Realtek ALC269VB with nothing muted unexpectedly. What can I do next to identify and solve the problem?

    Additional info: alsamixer shows two cards, HD-Audio Generic and HDA ATI-SB (Realtek ALC269VB); the first one is muted.

        ~$ aplay
        ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave
        aplay: main:682: blad otwierania audio: Nie ma takiego pliku ani katalogu

    The Polish part translates as "error opening audio: There is no such file or directory".

        ~$ sudo lspci -v | grep -A7 -i "audio"
        00:01.1 Audio device: Advanced Micro Devices [AMD] nee ATI Wrestler HDMI Audio [Radeon HD 6250/6310]
            Subsystem: ASUSTeK Computer Inc. Device 103b
            Flags: bus master, fast devsel, latency 0, IRQ 44
            Memory at feb44000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 3
            Capabilities: [58] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
        --
        00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) (rev 40)
            Subsystem: ASUSTeK Computer Inc. Device 103b
            Flags: bus master, slow devsel, latency 32, IRQ 16
            Memory at feb40000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Kernel driver in use: snd_hda_intel
            Kernel modules: snd-hda-intel

    Read the article

  • Breaking up classes and methods into smaller units

    - by micahhoover
    During code reviews a couple of devs have recommended I break up my methods into smaller ones. Their justification was (1) increased readability and (2) that the back trace that comes back from production, showing the method name, is more specific to the line of code that failed. There may also have been some colorful words about functional programming. Additionally, I think I may have failed an interview a while back because I didn't give an acceptable answer about when to break things up. My inclination is that when I see a bunch of methods in a class or across a bunch of files, it isn't clear to me how they flow together and how many times each one gets called. I don't get a good feel for the linearity of it as quickly just by eyeballing it. The other thing is that a lot of people seem to place a premium on organization over content (e.g. 'Look at how organized my sock drawer is!' Me: 'Overall, I think I can get to my socks faster if you count the time it took to organize them'). Our business requirements are not very stable, and I'm afraid that if the classes/methods are very granular it will take longer to refactor them when requirements change. I'm not sure how much of a factor this should be. Anyway, computer science is part art / part science, but I'm not sure how much that applies to this issue.

    Read the article

  • Ubuntu Boot Slow - Late Drive Access

    - by Mo2
    So I just installed Ubuntu on a brand new Samsung 840 Pro SSD and have noticed that it is slower than expected. The LED access light doesn't start blinking until 20 seconds or so have passed, after which the boot actually seems to start. Does anyone have any ideas as to why this is happening? I have Windows 7 installed on a separate SSD and use Windows Loader as the main loader. Windows Loader points to GRUB2, from which I then start Ubuntu. The 20-second count that I mentioned earlier starts AFTER I select Ubuntu 14.04.1 from the GRUB2 menu. Any help is appreciated. EDIT: I forgot to mention that I have only one partition for both / and /home. I intend on using a shared NTFS drive for my documents and other personal files, so I had no need for a separate /home. I did not make a swap partition since I have 12 GB of RAM. I also did not make a separate /boot since I was told that it's not really necessary in my case.

    Read the article

  • Enablement 2.0 Get Specialized!

    - by mseika
    Enablement 2.0 Get Specialized! The Oracle PartnerNetwork Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates to get certified. Oracle's certified exams go through a rigorous review process called a "beta period". Here are a few advantages of taking a beta exam:

    - Certification exams taken during the beta period count towards company Specializations.
    - Most new Certified Specialist exams have no training requirement.
    - Beta exam vouchers are available in limited quantity, so request a voucher today by contacting the Partner Enablement Team, and act fast to reserve your test from the list below.

    FREE Certification Testing: Are you attending OPN Exchange @ OpenWorld? Then join us at the OPN Specialist Test Fest, October 1st - 4th 2012, Marriott Marquis Hotel. Pre-register now!

    The beta testing period ends on October 6th, 2012 for Oracle E-Business Suite R12 Project Essentials (1Z1-511); on October 13th, 2012 for Oracle Hyperion Data Relationship Management Essentials (1Z1-588); and on November 17th, 2012 for Oracle Global Trade Management 6 Essentials (1Z1-589). Coming soon in beta: Oracle Fusion Distributed Order Orchestration Essentials Exam (1Z1-469). Take the exam(s) now at a nearby Pearson VUE testing center!

    Contact Us: Please direct any inquiries you may have to the Oracle Partner Enablement team at [email protected]. For More Information: Oracle Certification Program Beta Exams, OPN Certified Specialist Exam Study Guides, OPN Certified Specialist FAQ.

    Read the article

  • PASS: Total Registrations

    - by Bill Graziano
    At the Summit you’ll see PASS announce the total attendance and the “total registrations”. Total registrations is the sum of the conference attendees and the pre-conference registrations, so a single person can be counted three times (conference plus two pre-cons) in the total registration count. When I was doing marketing for the Summit this drove me nuts. I couldn’t figure out why anyone would use total registrations. However, when I tried to stop reporting this number I got lots of pushback. Apparently this is how conferences compare themselves to each other. Vendors, sponsors and Microsoft all wanted to know our total registration number. I was even asked why we weren’t doing more “things” that people could register for so that our number would be even larger. This drove me nuts. I understand that many of you are very detail-oriented. I just want to make sure you understand what numbers you’re seeing when we include them in the keynote at the Summit.

    Read the article

  • Capturing BizTalk 2004 SQLAdapter failures

    - by DanBedassa
    I was recently working on a BizTalk 2004 project where I encountered an issue with capturing exceptions (inside my orchestration) coming from an external source, like the database server being down or a non-existent stored procedure. I thought I might write this up in case it helps someone. To reproduce the issue, I just rename the database to something different. The orchestration was failing at the point where I make a SQL request via a Request-Response port. The exception handlers were bypassed, but I could see a warning in the event log saying: "The adapter failed to transmit message going to send port ". After scratching my head for a while (as a newbie to BTS 2004) to find a way to catch exceptions from the SQLAdapter in an orchestration, here is the solution I had:

    - Put the Send and Receive shapes inside a Scope shape.
    - Set the Scope's transaction type to "Long Running".
    - Add a Catch block expecting type "System.Exception".
    - Set the "Delivery Notification" of the associated port to "Transmitted".
    - Change the "Retry Count" of the associated port to 0. (This makes sure BizTalk raises the exception, instead of just a warning, so you can capture it.)

    Now capture the exception inside the Catch block and do whatever you need with it.

    Read the article

  • Does Google Analytics exclude Campaign traffic from Facebook in the Social reports?

    - by user1612223
    For a while we have used campaign tags when putting posts on Facebook so that we can run campaign reports in Google Analytics on those links. However, it appears that traffic from those links is being excluded from Google's social reports. For example, between 7/20 and 8/19 I'm seeing 123 visits where Facebook is the source in my Campaigns report, but only 29 visits where Facebook is the source in my Social Sources report. Main questions:

    1. Does Google exclude campaign traffic from its social reports?
    2. If it does, is there any way to reconcile that so the traffic shows up in both reports?
    3. If it doesn't, what could be causing the vast discrepancy?

    One observer noted that we are setting the medium to "Post" when passing the campaign parameters, and that Google may only allow "Referral" traffic in its social reports (just speculation). In that case we could potentially change the medium to "Referral", but that would undermine some of our strategy in being able to set different mediums. I have also considered that maybe the campaign traffic came to the site several times, and the social report may count the same user as fewer visits; however, over 70% of the Facebook campaign traffic is new traffic, so at a minimum there would need to be over 85 visits on the social side for that argument to be valid. I've done several searches on this topic and haven't run across much of anything. I did post the same question on Google's product forum, titled 'Facebook Campaign Traffic Not Showing in Social Reports', and have not gotten a response. The inability to pass campaign data on Facebook posts would make evaluating the performance of those specific posts very difficult, so I'm hoping there is a solution to this.

    Read the article

  • Mercurial says "nothing changed", but it did. Sometimes my software is too clever.

    - by user12608033
    It seems I have found a "bug" in Mercurial. It takes a shortcut when checking for differences in tracked files: if a file's size and modification time are unchanged, it assumes its contents are unchanged.

        $ hg init .
        $ cp -p .sccs2hg/2005-06-05_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg add nicstat.c
        $ hg commit -m "added nicstat.c"
        $ cp -p .sccs2hg/2005-07-02_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg diff
        $ hg commit
        nothing changed
        $ touch nicstat.c
        $ hg diff
        diff -r b49cf59d431d nicstat.c
        --- a/nicstat.c Fri Aug 24 11:21:27 2012 -0700
        +++ b/nicstat.c Fri Aug 24 11:22:50 2012 -0700
        @@ -2,7 +2,7 @@
          * nicstat - print network traffic, Kb/s read and written. Solaris 8+.
          * "netstat -i" only gives a packet count, this program gives Kbytes.
          *
        - * 05-Jun-2005, ver 0.81 (check for new versions, http://www.brendangregg.com)
        + * 02-Jul-2005, ver 0.90 (check for new versions, http://www.brendangregg.com)
          * [...]

    Now, before you agree or disagree with me on whether this is a bug, I will also say that I believe it is a feature. Yes, I feel it is an acceptable shortcut, because in "real" situations an edit to a file will change the modification time by at least one second (the resolution that hg diff or hg commit is looking for). The benefit of the shortcut is greatly improved performance of operations like "hg diff" and "hg status", particularly where your repository contains a lot of files.

    Why did I have no change in modification time? Well, my source file was generated by a script that I have written to convert SCCS change history to Mercurial commits. If my script can generate two revisions of a file within a second, and the files are the same size, then I run afoul of this shortcut. Solution - I will just change my script to apply the modification time from the SCCS history to the file prior to commit. A "touch -t " will do that easily.
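
    The shortcut itself is easy to picture; here is a toy sketch of the idea (not Mercurial's actual code), in which a tracked file is assumed clean whenever its recorded size and mtime still match:

        // Toy version of the dirstate shortcut: only fall back to comparing
        // contents when size or mtime differ from what was recorded last commit.
        using System;
        using System.IO;

        static class StatusCheck
        {
            public static bool MightBeModified(
                string path, long recordedSize, DateTime recordedMtimeUtc)
            {
                var info = new FileInfo(path);
                // Same size and same mtime: assume unchanged, skip reading the file.
                if (info.Length == recordedSize &&
                    info.LastWriteTimeUtc == recordedMtimeUtc)
                    return false;
                return true; // contents must now be read and compared
            }
        }

    The cp -p / touch -t failure mode above is exactly the case this predicate cannot see: both recorded values match even though the bytes differ.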

    Read the article

  • Ubuntu on Samsung NP700Z5B - no Grub

    - by copolii
    I just bought a Samsung NP700Z5B laptop. Gorgeous machine and great performance! I do 2 things when I get a new laptop:

    1. Format the HD and install Winblows from a CD to ditch the bloatware
    2. Install some variant of Linux on it (lately Ubuntu)

    Step 1 worked fine (until earlier today), but I haven't been able to install Ubuntu on it for the past 3 days! I've tried Mint 12, Ubuntu 12.04, Ubuntu 11.10, Ubuntu 11.04 and Ubuntu 10.04. The live CD and the installations all run fine and report no problems, but when I reboot, grub is nowhere to be found! The system goes directly to Winblows. I've tried booting from the live CD and re-installing grub via the chroot and purge-and-reinstall methods (https://help.ubuntu.com/community/Grub2), and neither makes a difference. I've also tried copying the boot sector:

        dd if=/dev/sda of=linux.bin bs=512 count=1

    and putting it on C:, then setting bcdedit to add the entry to the Windows bootloader, with no results. Earlier today I decided to try to set my boot partition as an EFI boot partition ... bad choice, now I don't even have the Winblows bootloader. I've officially run out of ideas. Tried calling Samsung, but they're closed (they'd probably say something stupid along the lines of "Samsung recommends Windows 7" ... I've had Dell say that to me). Any help would be greatly appreciated.

    Update 1: Tried re-installing 12.04 and now the screen keeps turning off and back on, but there's still no sign of booting ... it has been doing this for 15 minutes so far (I set the boot partition type to ext2 instead of ext4).

    Update 2: Well ... this just gets better and better. I inserted the installation USB key to reboot, and the flickering stopped for about a minute (the screen remained on), then it started turning off and on again.

    Read the article

  • Is there any way to send a column value from outer query to inner sub query? [closed]

    - by chetan
    The 'Discussions' table schema:

        title    description  desid  replyto  upvote  downvote  views
        browser  used         a1     none     1       1         12
        -        bad topic    b2     a1       2       3         14
        sql      database     a3     none     4       5         34
        -        crome        b4     a3       3       4         12

    The table holds two content types, main topics and comments. The unique content identifier 'desid' distinguishes them: a 'desid' starting with 'a' is a main topic, and one starting with 'b' is a comment. For a comment, 'replyto' is the 'desid' of the main topic the comment is associated with. I would like to list the top main topics, ranked by the sum (upvote + downvote + views + number of comments on the topic). The following query gives the top topics ordered by (upvote + downvote + views):

        select * from [DB_user1212].[dbo].[discussions]
        where desid like 'a%'
        order by (upvote + downvote + views) desc

    For (comments + upvote + downvote + views) I tried:

        select * from [DB_user1212].[dbo].[discussions]
        where desid like 'a%'
        order by ((select count(*) from [DB_user1212].[dbo].[discussions]
                   where replyto = desid) + upvote + downvote + views) desc

    but it didn't work, because it's not possible to send desid from the outer query to the inner subquery. How can I solve this? Please note that I want the solution in query language only.

    Read the article

  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B. But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions. At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT, but with much more flexibility. We could also augment TXPAUSE with a cycle-count bound to cap the time spent stalled. I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of by traditional memory values. But TXPAUSE gives us a polite way to spin.

    Read the article

  • Minimum percentage of free physical memory that Linux require for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes:

        Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers

    MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4, "Final Remarks"):

    Free Memory and Used Memory. Estimating the resource usage, especially the memory consumption of processes, is by far more complicated than it looks at first glance. The philosophy is: an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand.

    That said, focusing on the percentage question: apart from the memory the OS takes, what minimum free memory must be available on every node so that it operates normally? The answer: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.

    Read the article
