Search Results

Search found 16885 results on 676 pages for 'custom headers'.


  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput, with low latencies and high availability.

    Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination and associated SLAs. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability, i.e., if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the “owner” of the packet, etc. It looks up the internal database of “rules” for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you minimally have to:

    · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability, etc.
    · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
    · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router in order to make this work.
    · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.
    · Update various statistics about the packet.

    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
    Berkeley DB (or BDB for short) is a robust, reliable, proven solution that is currently being used in exactly these scenarios. First and foremost, BDB is very, very fast: it can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
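    To make the shape of that per-packet transaction concrete, here is a minimal, hypothetical sketch in C#. It is written against an imaginary embedded key-value API (the store, table, and field names are invented for illustration; this is not the actual Berkeley DB API), and the only point it makes is that the lookups, the send/hold decision, and the statistics update all happen inside one transaction.

    using System;

    public sealed class Packet
    {
        public string Id, Destination, Owner, SessionId;
        public byte[] Bytes;
    }

    // Imaginary embedded key-value store API (illustration only, not BDB).
    public interface ITxn : IDisposable
    {
        byte[] Get(string table, string key);             // indexed lookup
        void Put(string table, string key, byte[] value); // transactional write
        void Commit();
    }

    public interface IKvStore { ITxn Begin(); }

    public sealed class PacketPipeline
    {
        private readonly IKvStore _db;
        public PacketPipeline(IKvStore db) { _db = db; }

        public void Process(Packet p)
        {
            using (ITxn txn = _db.Begin())  // one transaction per packet
            {
                byte[] nextHop = txn.Get("routes",   p.Destination); // next "hop"
                byte[] sla     = txn.Get("slas",     p.Owner);       // priority/SLA
                byte[] context = txn.Get("sessions", p.SessionId);   // session "slice"

                if (IsLowPriority(sla))
                    txn.Put("held", p.Id, p.Bytes); // hold back, forward later
                else
                    Forward(p, nextHop);            // send inside the txn so the
                                                    // route can't change mid-send

                txn.Put("stats", p.Owner, Increment(context)); // statistics
                txn.Commit();                       // all of the above, atomically
            }
        }

        private static bool IsLowPriority(byte[] sla) { return sla != null && sla[0] == 0; }
        private static byte[] Increment(byte[] ctx) { return ctx ?? new byte[8]; }
        private static void Forward(Packet p, byte[] hop) { /* hand off to the NIC */ }
    }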

    Read the article

  • iwlwifi on lenovo z570 disabled by hardware switch

    - by Kevin Gallagher
    It was working fine with Windows 7. The hardware switch is not disabled. I've toggled it back and forth dozens of times. The wifi light never turns on and it always lists as hardware disabled. I have the latest updates installed. I've been searching for solutions, but none of them seem to work for me. I've tried removing acer-wmi. I've tried setting 11n_disable=1. I've tried resetting the BIOS. I've tried using rfkill to unblock (it only removes the soft block). I've rebooted dozens of times. The wifi light turns off as soon as grub loads. Edit: I have a USB Edimax wireless NIC. It shows hardware disabled as well (although rfkill lists it as unblocked). If I unload iwlwifi, the USB NIC works fine.

    uname -a
    Linux xxx-Ideapad-Z570 3.2.0-55-generic #85-Ubuntu SMP Wed Oct 2 12:29:27 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    rfkill list
    19: phy18: Wireless LAN
        Soft blocked: no
        Hard blocked: yes

    dmesg
    [43463.022996] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree:
    [43463.023002] Copyright(c) 2003-2011 Intel Corporation
    [43463.023107] iwlwifi 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
    [43463.023190] iwlwifi 0000:03:00.0: setting latency timer to 64
    [43463.023253] iwlwifi 0000:03:00.0: pci_resource_len = 0x00002000
    [43463.023257] iwlwifi 0000:03:00.0: pci_resource_base = ffffc900057c8000
    [43463.023261] iwlwifi 0000:03:00.0: HW Revision ID = 0x0
    [43463.023797] iwlwifi 0000:03:00.0: irq 43 for MSI/MSI-X
    [43463.024013] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C
    [43463.024250] iwlwifi 0000:03:00.0: L1 Enabled; Disabling L0S
    [43463.045496] iwlwifi 0000:03:00.0: device EEPROM VER=0x15d, CALIB=0x6
    [43463.045501] iwlwifi 0000:03:00.0: Device SKU: 0X50
    [43463.045504] iwlwifi 0000:03:00.0: Valid Tx ant: 0X1, Valid Rx ant: 0X3
    [43463.045542] iwlwifi 0000:03:00.0: Tunable channels: 13 802.11bg, 0 802.11a channels
    [43463.045744] iwlwifi 0000:03:00.0: RF_KILL bit toggled to disable radio.
    [43463.047652] iwlwifi 0000:03:00.0: loaded firmware version 39.31.5.1 build 35138
    [43463.047823] Registered led device: phy18-led
    [43463.047895] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
    [43463.048037] ieee80211 phy18: Selected rate control algorithm 'iwl-agn-rs'
    [43463.055533] ADDRCONF(NETDEV_UP): wlan0: link is not ready

    nm-tool
    State: connected (global)
    - Device: wlan0 ----------------------------------------------------------------
      Type:              802.11 WiFi
      Driver:            iwlwifi
      State:             unavailable
      Default:           no
      HW Address:        74:E5:0B:4A:9F:C2
      Capabilities:
      Wireless Properties
        WEP Encryption:  yes
        WPA Encryption:  yes
        WPA2 Encryption: yes
      Wireless Access Points

    lshw -C network
    *-network DISABLED
        description: Wireless interface
        product: Centrino Wireless-N 1000 [Condor Peak]
        vendor: Intel Corporation
        physical id: 0
        bus info: pci@0000:03:00.0
        logical name: wlan0
        version: 00
        serial: 74:e5:0b:4a:9f:c2
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
        configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-55-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bg
        resources: irq:43 memory:f1500000-f1501fff

    lspci
    03:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak]

    Read the article

  • C# Dev - I've tried Lisps, but I don't get it.

    - by Jonathan Mitchem
    After a few months of learning about and playing with Lisps, both CL and a bit of Clojure, I'm still not seeing a compelling reason to write anything in one of them instead of C#. I would really like some compelling reasons, or for someone to point out that I'm missing something really big.

    The strengths of a Lisp (per my research):

    Compact, expressive notation - More so than C#, yes... but I seem to be able to express those ideas in C# too.
    Implicit support for functional programming - C# with LINQ extension methods: mapcar = .Select( lambda ); mapcan = .Select( lambda ).Aggregate( (a,b) => a.Union(b) ); car/first = .First(); cdr/rest = .Skip(1); etc. (A compilable sketch of these mappings appears after this post.)
    Lambda and higher-order function support - C# has this, and the syntax is arguably simpler: "(lambda (x) ( body ))" versus "x => ( body )". "#(" with "%", "%1", "%2" is nice in Clojure.
    Method dispatch separated from the objects - C# has this through extension methods.
    Multimethod dispatch - C# does not have this natively, but I could implement it as a function call in a few hours.
    Code is Data (and Macros) - Maybe I haven't "gotten" macros, but I haven't seen a single example where the idea of a macro couldn't be implemented as a function; it doesn't change the "language", but I'm not sure that's a strength.
    DSLs - Can only do it through function composition... but it works.
    Untyped "exploratory" programming - For structs/classes, C#'s autoproperties and "object" work quite well, and you can easily escalate into stronger typing as you go along.
    Runs on non-Windows hardware - Yeah, so? Outside of college, I've only known one person who doesn't run Windows at home, or at least a VM of Windows on *nix/Mac. (Then again, maybe this is more important than I thought and I've just been brainwashed...)
    The REPL for bottom-up design - OK, I admit this is really, really nice, and I miss it in C#.

    Things I'm missing in a Lisp (due to a mix of C#, .NET, Visual Studio, ReSharper):

    Namespaces. Even with static methods, I like to tie them to a "class" to categorize their context. (Clojure seems to have this; CL doesn't seem to.)
    Great compile- and design-time support: the type system allows me to determine "correctness" of the data structures I pass around; anything misspelled is underlined in real time, so I don't have to wait until runtime to find out; code improvements (such as using an FP approach instead of an imperative one) are auto-suggested.
    GUI development tools: WinForms and WPF. (I know Clojure has access to the Java GUI libraries, but they're entirely foreign to me.)
    GUI debugging tools: breakpoints, step-in, step-over, value inspectors (text, xml, custom), watches, debug-by-thread, conditional breakpoints, a call-stack window with the ability to jump to the code at any level in the stack. (To be fair, my stint with Emacs+SLIME seemed to provide some of this, but I'm partial to the VS GUI-driven approach.)

    I really like the hype surrounding Lisps and I gave them a chance. But is there anything I can do in a Lisp that I can't do as well in C#? It might be a bit more verbose in C#, but I also have autocomplete. What am I missing? Why should I use Clojure/CL?
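    Since the post leans on those LINQ equivalences, here is a small, compilable C# sketch of exactly the mappings listed above (the sample data is invented; note that SelectMany is the more idiomatic LINQ flatten-map, while the Aggregate/Union form is the one quoted in the list):

    using System;
    using System.Linq;

    class LispIshLinq
    {
        static void Main()
        {
            var xs = new[] { 1, 2, 3, 4 };

            // mapcar = .Select( lambda )
            var squares = xs.Select(x => x * x);          // 1 4 9 16

            // car/first = .First(), cdr/rest = .Skip(1)
            var head = xs.First();                        // 1
            var tail = xs.Skip(1);                        // 2 3 4

            // mapcan = .Select( lambda ).Aggregate( (a,b) => a.Union(b) )
            var flat = xs.Select(x => new[] { x, -x }.AsEnumerable())
                         .Aggregate((a, b) => a.Union(b));

            Console.WriteLine(string.Join(" ", squares)); // 1 4 9 16
            Console.WriteLine(string.Join(" ", flat));    // 1 -1 2 -2 3 -3 4 -4
        }
    }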

    Read the article

  • Application Integration Future – BizTalk Server & Windows Azure (TechEd 2012)

    - by SURESH GIRIRAJAN
    I am really excited to see a lot of news around BizTalk at TechEd 2012. I was recently watching the session presented by Sriram and Rajesh on “Application Integration Futures: The Road Map and What's Next on Windows Azure”. It was a great session with a lot of interesting feature updates for BizTalk and Azure integration. I have highlighted them below. Customers who haven’t started using the Microsoft BizTalk ESB Toolkit should definitely start using it, since it is going to be part of the core BizTalk product in a future release, which is cool…

    BizTalk Server feature enhancements

    Manageability:
    - ESB Toolkit is going to be part of the core BizTalk product and setup.
    - Visualize BizTalk artifact dependencies in the BizTalk administration console.
    - HIS administration using configuration files.

    Performance:
    - Improvements in ordered send port scenarios.
    - Improved performance in dynamic send ports and ESB, plus the ability to configure the BizTalk host handler for dynamic send ports. Right now they run under the default host, which does not allow them to scale.
    - MLLP adapter enhancements and DB2 client transaction load balancing / client bulk insert.

    Platform support:
    - Visual Studio 2012, Windows 8 Server, SQL Server 2012, Office 15 and System Center 2012.
    - B2B enhancements to support the latest industry standards natively.

    Connectivity improvements:
    - Consume REST services directly in BizTalk.
    - SharePoint integration made easier.
    - Improvements to the SMTP adapter, adding macros for sending the same email with different content to different parties.
    - Connectivity to Azure Service Bus relay, queues and topics.
    - DB2 client connectivity to SQL Server and SQL Server connectivity to Informix.
    - CICS HTTP client connectivity to Windows.

    Azure:
    - Use Azure as IaaS/PaaS for the BizTalk environment.
    - Use Azure to provision a BizTalk environment for test or development, and later move on-premises or build a hybrid cloud approach.
    - Eliminate hardware procurement for BizTalk environments for testing / demos / development etc.
    - Create a BizTalk farm easily and remove/add servers as needed.

    EAI Service:
    - EAI Bridge: protocol transformation, message transformation, running custom code, message enrichment.
    - Hybrid connectivity: LOB applications, on-premises applications, on-premises connectivity to applications in the cloud, queues/topics, FTP, devices, web services.

    A bridge can be customized based on the service needs to provide the different capabilities needed as part of the bridge. Look at the EDI bridge for a sample EDI service, with tracking enabled through the portal: http://msdn.microsoft.com/en-us/library/windowsazure/hh674490

    Adapters:
    - Service Bus Messaging adapter - new adapter added.
    - WebHttp adapter - for REST services.
    - NetTcpRelay adapter - new adapter added.

    I will start posting more once I start playing with this…

    Read the article

  • Workflow 4.5 is Awesome, can't wait for 5.0!

    - by JoshReuben
    About 2 years ago I wrote a blog post describing what I would like to see in Workflow vNext: http://geekswithblogs.net/JoshReuben/archive/2010/08/25/workflow-4.0---not-there-yet.aspx

    At the time WF 4.0 was a little rough around the edges – the State Machine was on CodePlex and people were simulating state machines with Flowcharts. Last year I built a near-realtime machine management system using WF 4.0.1 – it's managing the internal operations of this device: http://landanano.com/products/commercial

    Well, WF 4.5 has come a long way – many of my gripes have been addressed:
    - C# expressions - no more VB 'AndAlso' clauses
    - State machine awesomeness - can query current state
    - Many designer improvements - Document Outline is so much more succinct than the Designer!
    - Separate WCF service contract interfaces and the ability to generate activities from contract operations
    - Ability to rehydrate to updated flow definitions via DynamicUpdateMap and WorkflowIdentity

    You can read about the new features here: http://msdn.microsoft.com/en-us/library/hh305677(VS.110).aspx

    2013 could be the year of Workflow evangelism for .NET, as it comes together as the DSL language. E.g. on Azure it could be used to graphically orchestrate between WebRoles, WorkerRoles, AppFabric Queues and the ServiceBus – that would be grand.

    Here’s a list of things I’d like to see in Workflow 5.0:
    - Stronger parallelism support for true multithreaded workflows. A workflow executes on a single thread – wouldn’t it be great if we had the ability to model TPL Dataflow? Parallel is not really parallel, it just allows AsyncCodeActivity.
    - Support for recursion
    - An ExpressionTree activity with an editor design surface
    - A math activity pack
    - Return of application-level protocol (3.5 WF services) – automatically expose a state machine as a WCF service, with bookmark Receive activities generated from OperationContract automatically placed in state transition triggers
    - A new HTML5 ActivityDesigner control – with different CSS3 skinnable hooks and remote connectivity (had to roll my own)
    - A data flow view – crucial to understanding the big picture
    - Ability to refactor a Sequence to a custom activity in a separate .xaml file – like Expression Blend does for UserControl
    - State machine global error handling - if all states go to an error state, you quickly get visual spaghetti. You could nest a state machine, but what if you want an application-level protocol whereby each state exposes certain WCF ops?
    - DSL RAD editing - make the Document Outline into a DSL editor for adding activities. For WF to really succeed as a higher level of abstraction, it needs to be more productive than raw coding - drag & drop on the designer is currently too slow compared to just typing code.
    - Extensible wizard API - for a pluggable WF editor experience
    - Other execution models beyond Sequence, Flowchart & StateMachine: SSIS, behavior trees, Wolfram Model tool – surprise us!
    - Improvements to the Designer debugging API - SourceLocation is tied to XAML file line number and char position, and ModelService access seems convoluted - why not leverage WPF LogicalTreeHelper / VisualTreeHelper?

    Workflow Team, keep on rocking!

    Read the article

  • Java Embedded Releases

    - by Tori Wieldt
    Oracle today announced a new product in its Java Platform, Micro Edition (Java ME) portfolio: Oracle Java ME Embedded 3.2, a complete client Java runtime optimized for resource-constrained, connected, embedded systems. Oracle is also releasing Oracle Java Wireless Client 3.2 and Oracle Java ME Software Development Kit (SDK) 3.2, and announced Oracle Java Embedded Suite 7.0 for larger embedded devices, providing a middleware stack for embedded systems. Small is the new big!

    Introducing Oracle Java ME Embedded 3.2

    Oracle Java ME Embedded 3.2 is designed and optimized to meet the unique requirements of small embedded, low-power devices such as micro-controllers and other resource-constrained hardware without screens or user interfaces. These include:
    - On-the-fly application downloads and updates
    - Remote operation, often in challenging environments
    - Ability to add new capabilities without impacting the existing functions
    - Support for hardware with as little as 130 kB RAM and 350 kB ROM

    Oracle Java Wireless Client 3.2

    Oracle Java Wireless Client 3.2 is built around an optimized Java ME implementation that delivers a feature-rich application environment for mass-market mobile devices. This new release:
    - Leverages standard JSRs, Oracle optimizations/APIs and a flexible porting layer for device-specific customizations, which are tuned to device/chipset requirements
    - Supports advanced tooling functions, such as memory and network monitoring and on-device tooling
    - Offers new support for dual SIM functionality, which is highly useful for mass-market devices supported by multiple carriers with multiple phone connections

    Oracle Java ME SDK 3.2

    Oracle Java ME SDK 3.2 provides a complete development environment for both Oracle Java ME Embedded 3.2 and Oracle Java Wireless Client 3.2. Available for download from OTN, the latest version includes:
    - Small embedded device support
    - In-field and remote administration and debugging
    - Java ME SDK plug-ins for Eclipse and the NetBeans Integrated Development Environment (IDE), enabling more application development environments for Java ME developers
    - A new device skin creator that developers can use to generate custom device skins for testing their applications

    Oracle Java Embedded Suite 7.0

    The Oracle Java Embedded Suite is a new packaged solution from Oracle (including Java DB, GlassFish for Embedded Suite, Jersey Web Services Framework, and the Oracle Java SE Embedded 7 platform), created to provide value-added services for collecting, managing, and transmitting data to and from other embedded devices. The Oracle Java Embedded Suite is a complete device-to-data-center solution subset for embedded systems.

    See Java ME and Java Embedded in Action

    Java ME and Java Embedded technologies will be showcased for developers at JavaOne 2012 in over 60 conference sessions and BOFs, as well as in the JavaOne Exhibition Hall. For business decision makers, the new event Java Embedded @ JavaOne is where you can learn more about Java Embedded technologies and solutions.

    Read the article

  • Where would a spam bot be located?

    - by Tim
    I have a hosted website using a free hosting service. I received an email this afternoon saying that I have been suspended because my account has been compromised. Basically, someone is using my email account to mass-send spam. I've changed all the passwords and everything, but when my Gmail pulls the emails from the host it's still downloading loads of spam messages that show like this:

    This message was created automatically by mail delivery software. A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed:

    [email protected]

    SMTP error from remote mail server after end of data: host 198.91.80.251 [198.91.80.251]: 554 5.6.0 id=23634-03 - Rejected by MTA on relaying, from MTA([127.0.0.1]:10030): 554 Error: This email address has lost rights to send email from the system

    ------ This is a copy of the message, including all the headers. ------

    Return-path: <[email protected]>
    Received: from keenesystems.com ([66.135.33.211]:2370 helo=server211) by absolut.x10hosting.com with esmtpsa (TLSv1:RC4-MD5:128) (Exim 4.77) (envelope-from <[email protected]>) id 1TGwSW-002hHe-Lc for [email protected]; Wed, 26 Sep 2012 13:35:44 -0500
    MIME-Version: 1.0
    Date: Wed, 26 Sep 2012 13:35:43 -0500
    X-Priority: 3 (Normal)
    X-Mailer: Ximian Evolution 3.9.9 (8.5.3-6)
    Subject: New staff members wanted at Auction It Online
    From: [email protected]
    Reply-To: [email protected]
    To: "Nadia Monti" <[email protected]>
    Content-Type: text/plain
    Content-Transfer-Encoding: quoted-printable
    Message-ID: <OUTLOOK-IDM-9aed7054-6a3e-e1a4-1d5c-3e73377652a6@server211>

    Date : 26 September 2012=0ATime : 13:35=0ASender : Dennise Halcomb Head = Office Manager of RJ Auction Drop-Off Int.=0A=0ANice to meet you Nadia M= onti=0A=0ARJ ADO Ltd., a USA based company, offers a significant amount = of goods worldwide for our customers on eBay and other auction venues. = Our company's main target is to provide a suitable and cost-effective se= rvice for any person, company or fundraising company. The main purpose o= f the administrative assistant / sales support representative is to cont= ribute to the sales force and add convenience to our cost-effective serv= ice dedicated to individuals, businesses, and organizations worldwide. O= ur HR department obtained your resume from one of the various job-orient= ed websites just to offer you this post.=0A=0AWorking Schedule: This is = a part time and home-based offer. You won't need to spend more than 3 ho= urs each day. Your =0Aschedule will be flexible.=0A=0ASalary: At the end= of the trial period (it lasts for 1 month) you will be paid 1,800 EUR. = With the average volume of clients your overall income will raise up to = 3,000 EUR per month. After the trial period is over your base salary wil= l grow up to 2,500 EUR per month, so you will earn 5% commission from th= e transactions completed.=0A=0AWhere?: Italy Wide. As it is a stay at ho= me position all the communication will be carried out via email and via = phone.=0A=0ARequirements: Access to the internet during the workday and = basic microsoft office skills are needed. Basic knowledge of English is = required (most of the contacts will be in English).=0A=0ACosts and Fees:= There are NO costs at any time for our employees. All fees related to t= his position are covered by the RJ ADO Co. Ltd..=0A=0AFurther Hiring Pro= cess: If you are interested in position we offer, please reply to this e= mail and send us the copy of your resume for verification.=0A=0AAfter re= viewing all of the received applications we will reply to successful app= licants only. Then we'll offer to these successful applicants a position= within our firm on a trial period basis for one month beginning from th= e date you sign a trial agreement. During this trial period you will rec= eive full guidance and support. Employees on a one monthly trial period = are evaluated at least one week prior to the end of their trial. During = the trial, your supervisor can recommend termination. At the end of the = trial period, the supervisor can offer continued employment, extension o= f trial period, or termination. After the trial period you may ask for m= ore hours or continue full-time.=0A=0AIf you are interested in this posi= tion, just reply to this email and send any questions you have and the c= opy of your resume for verification.=0A=0AThank You,=0AHR-Manager of RJ = ADO Co. Ltd.=0A=0APermission Settings=0AYou have been referred to RJ Auc= tion Drop-Off If you feel you received this email in error or do not wis= h to receive future messages, please reply to this message with "remove"= in the subject field. We will immediately update our database according= ly. =0AWe apologize for any inconvenience caused.=0A=0ARJ Auction Drop-O= ff Co. Ltd.

    I'm not aware of how this has happened, and I'm not sure how anyone could have got hold of my password. It's a simple WordPress install; at some point recently my host went down and there was a fresh install of WordPress with default admin accounts, and I have a feeling it could be something to do with this. My question is: even though I've changed all my passwords, it's all still happening, so is there anywhere in particular this script would be stored on my host? I really can't deal with having my hosting account suspended and my email account sending all this spam.

    Read the article

  • cocos2d-x simple shader usage [on hold]

    - by Narek
    I want to obtain the color ramp effect from this tutorial: http://www.raywenderlich.com/10862/how-to-create-cool-effects-with-custom-shaders-in-opengl-es-2-0-and-cocos2d-2-x

    Here is my code in cocos2d-x 3:

    bool HelloWorld::init()
    {
        //////////////////////////////
        // 1. super init first
        if ( !Layer::init() )
        {
            return false;
        }

        Vec2 origin = Director::getInstance()->getVisibleOrigin();

        sprite = Sprite::create("HelloWorld.png");
        sprite->setAnchorPoint(Vec2(0, 0));
        sprite->setRotation(3);
        sprite->setPosition(origin);
        addChild(sprite);

        std::string str = FileUtils::getInstance()->getStringFromFile("CSEColorRamp.fsh");
        const GLchar * fragmentSource = str.c_str();

        GLProgram* p = GLProgram::createWithByteArrays(ccPositionTextureA8Color_vert, fragmentSource);
        p->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
        p->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORD);
        p->link();
        p->updateUniforms();
        sprite->setGLProgram(p);

        // 3
        colorRampUniformLocation = glGetUniformLocation(sprite->getGLProgram()->getProgram(), "u_colorRampTexture");
        glUniform1i(colorRampUniformLocation, 1);

        // 4
        colorRampTexture = Director::getInstance()->getTextureCache()->addImage("colorRamp.png");
        colorRampTexture->setAliasTexParameters();

        // 5
        sprite->getGLProgram()->use();
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, colorRampTexture->getName());
        glActiveTexture(GL_TEXTURE0);

        return true;
    }

    And here is the fragment shader as it is in the tutorial:

    #ifdef GL_ES
    precision mediump float;
    #endif

    // 1
    varying vec2 v_texCoord;
    uniform sampler2D u_texture;
    uniform sampler2D u_colorRampTexture;

    void main()
    {
        // 2
        vec3 normalColor = texture2D(u_texture, v_texCoord).rgb;

        // 3
        float rampedR = texture2D(u_colorRampTexture, vec2(normalColor.r, 0)).r;
        float rampedG = texture2D(u_colorRampTexture, vec2(normalColor.g, 0)).g;
        float rampedB = texture2D(u_colorRampTexture, vec2(normalColor.b, 0)).b;

        // 4
        gl_FragColor = vec4(rampedR, rampedG, rampedB, 1);
    }

    As a result I get a black screen with 2 draw calls. What is wrong? Am I missing something?

    Read the article

  • Start Time & Calculated Column Wonkiness in a SharePoint Event Calendar

    - by _zekeMouseOver
    I was creating some custom rollups on some of our event calendars and came across a very odd bug when trying to grab only the date component of the built-in Start Time field. One's first inclination will be to create a calculated column and give it the formula

    =[Start Time]

    ...and then assign its output type to be "Date Only." This works well until a user adds an All Day Event. For reasons unexplainable, the All Day Event flag causes your =[Start Time] to display the date minus one day. Here is an example of this in action: Start Date and Time, Duration, Start Date Value and Start Day are all calculated fields. Notice how the Start Date and Time (=[Start Time]) is reporting 6:00 PM of the previous day. The Start Date Value (=[Start Time] - Output Type: Number) confirms this (.75 = 6:00 PM). Curiously enough, the Duration (=[End Time]-[Start Time]) is properly reporting the duration between 12:00 AM and 11:59 PM. Why? I don't know. Perhaps it's somehow bound to the regional settings on the site, but I'm not interested in changing a global site setting for the sake of one calculated field.

    With this information at our disposal, our calculated column to display the date part of the start date needs to be modified to add one day to the [Start Time] field if an All Day Event is selected. To determine this, we use the Duration above to assume the item is an all-day event and change our formula to be:

    =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",[Start Time] + 1, [Start Time])

    This will work, but what happens when the user de-selects the "All Day Event" checkbox? The duration stays the same, but all other values begin reporting the correct time. Since our formula above is strictly based on an expected duration, it will add one to the correct date, causing the date 5/11/2010 to appear. Notice though that the raw value of the start time (in this case) is a non-fractional number (40,308), whereas the all-day event was being represented as 6:00 PM (.75) of the previous day. We can use this to add one more nested branch of logic to our calculation:

    =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",IF([Start Time]=ROUND([Start Time],0),[Start Time],[Start Time]+1),[Start Time])

    I feel somewhat... dirty about having to resort to this kind of calculation in what SHOULD have been a simple =[Start Time] to extract the date part of the Start Time field, but there you have it. Make sure to shower extra long after having used it.

    Read the article

  • Lookup Viewer

    - by Geertjan
    The Maven integrated view that I showed yesterday I was able to create because I happened to know that an implementation of SubprojectProvider and LogicalViewProvider are in the Lookup of Maven projects. With that knowledge, I was able to use and even delegate to those implementations. But what if you don't know that those implementations are in the Lookup of the Project object? In the case of the Maven Project implementation, you could look in the source code of the Maven Project implementation, at the "getLookup" method. However, any other module could be putting its own objects into that Lookup, dynamically, i.e., at runtime. So there's no way of knowing what's in the Lookup of any Project object or any other object with a Lookup. But now imagine that you have a Lookup Viewer, as a tool during development, which you would exclude when distributing the application. Whenever new objects are found in the Lookup, the viewer displays them. You could install the Lookup Viewer into NetBeans IDE, or any other NetBeans Platform application, and then get a quick impression of what's actually in the Lookup when you select a different item in the application during development. Here it is (though I vaguely remember someone else writing something similar): Above, a Maven Project is selected. The Lookup Window shows that, among many other classes, an implementation of SubprojectProvider and LogicalViewProvider are found in the Lookup when the Maven Project is selected. If an item in the Lookup Window has its own Lookup, the content of that Lookup is displayed as child nodes of the Lookup, etc, i.e., you can explore all the way down the Lookup of each item found within objects found within the current selection. (What's especially fun is seeing the SaveCookieImpl being added and removed from the Lookup Window when you make/save a change in a document.) Another example is below, showing the Lookup Window installed in a custom application created during a course at MIT in Boston: A small trick I had to apply is that I always show the previous Lookup, since the current Lookup, when you select one of the Nodes in the Lookup Window, would be the Lookup of the Lookup Window itself! If anyone is interested in this, I can publish the NetBeans module providing the above window to the NetBeans update center. 

    Read the article

  • Building SANE from git source produces a backend mismatch on 12.04, even when built locally

    - by deinonychusaur
    It seems to me that with Ubuntu Precise Pangolin it is all but easy to do a proper install of SANE from source (the git repo). I've found other scanning issues while trying to find an answer to this, where the output people posted seems to indicate they suffer the same issue (unknowingly). If I build the SANE source from git on a fresh install of Ubuntu 12.04, I get:

    $ scanimage -V
    scanimage (sane-backends) 1.0.24git; backend version 1.0.22

    (I basically followed the instructions on http://ubuntuportal.com/2012/02/how-to-get-an-canon-canoscan-lide-100-scanner-to-work-in-ubuntu-11-10linux-mint-12.html since I didn't find any other information, making sure that SANE was not installed prior to installation.)

    My primary interest is the epson2 backend. In 1.0.22 it offers the wrong TPU settings for the Epson V700 (TPU2 mode wasn't supported in 1.0.22, and the scanner is useless to me if I don't have TPU2 support). Since it shows 1.0.22 behaviour when I ask it to enter transparency mode, this implies that the epson2 backend comes from 1.0.22 and not the 1.0.24 I just built. If I install SANE with a prefix to a local folder and run that version of scanimage, it still produces the mismatch. However, on another computer where I installed a custom 1.0.22 build of SANE prior to upgrading to Ubuntu 12.04, I can build and install the same SANE git locally and have it correctly match backends:

    $ ./SANE/bin/scanimage -V
    scanimage (sane-backends) 1.0.24git; backend version 1.0.24
    $ scanimage -V
    scanimage (sane-backends) 1.0.22; backend version 1.0.22

    On this computer the 1.0.24 build works correctly in finding TPU2 on the Epson V700. So what am I missing or doing wrong? (And I want to replace 1.0.22 with 1.0.24 for the whole system; the local build was just for debugging.) Any help would be much appreciated.

    Edit 1: I just tried compiling SANE using this instruction on Ubuntu 10.04 and it worked like a charm. However, when I upgraded to 12.04 (I really would like to run 12.04), SANE was downgraded to 1.0.22. When trying the same set of instructions on 12.04 I was still out of luck -- the backend mismatch was there again (and I do have libusb-dev installed).

    Edit 2: I updated to Ubuntu 12.10, which now has the 1.0.23 SANE drivers. I haven't dared trying to compile from source on 12.10, since 1.0.23 is good enough for me. This is just a work-around, though, and I would still like to know what's up with Ubuntu 12.04.

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices?

    The classic answer: write a web service layer

    Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different?

    The better answer: the iSql SDK

    Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API.

    OSQL in 25 lines of code

    As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video.

    Beta out now - your chance to give us your suggestions!

    We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!

    Read the article

  • Keep Learning After Your Oracle Training Class is Over - Save 50%!

    - by KJones
    Written by Amit Kumar, Senior Director, Oracle University Digital Training

    Every training class you take about the latest Oracle application or technology moves you closer to developing the skills you need to succeed. But after class is over, how do you keep up with today’s accelerating pace of innovation? To keep up with the very latest technological advances, you need an ongoing and flexible training solution: one that lets you learn during your own downtime, knowledge that’s easy to access, interactive lessons where you connect with experts, and a simple way to increase your knowledge, on your own time and at your own pace. The new Oracle Learning Streams is the flexible training solution you're looking for.

    Continuously Learn with Oracle Learning Streams

    Over time, Oracle Learning Streams help you develop the depth and breadth of knowledge that will give you the tools to become an expert in your field. By taking advantage of comprehensive and frequently updated information, you can keep learning continuously, at your own pace, when it's convenient for you. Sign up today and get 12 months of unlimited access to:
    •    Hundreds of videos delivered by Oracle experts for fresh and continuous product learning
    •    Live connections with Oracle's top instructors
    •    Robust video search capability to find exactly what you’re looking for
    •    Features that allow you to build your own custom learning queue and request new content

    Oracle Learning Streams are now available for Oracle Database and Oracle Middleware. Take a moment to preview the content now.

    For a Limited Time - Save 50%

    For a limited time, save 50% when you order Oracle Learning Streams with any other Oracle Classroom, Live Virtual Class or Training On Demand course. Now there is no reason for learning to stop when class is over!

    Read the article

  • BIDS Helper 1.6 Beta Release (now with SQL 2012 support!)

    - by Darren Gosbell
    The beta for BIDS Helper 1.6 was just released. We have not updated the version notification just yet, as we would like to get some feedback on people's experiences with the SQL 2012 version. So if you are using SQL 2012, go grab it and let us know how you go (you can post a comment on this blog post or on the BIDS Helper site itself). This is the first release that supports SQL 2012 and consequently also the first release that runs in Visual Studio 2010. A big thanks to Greg Galloway for doing the bulk of the work on this release.

    Please note that if you are doing an xcopy deploy you will need to unblock the files you download or you will get a cryptic error message. This appears to be caused by a security update to either Visual Studio or the .NET Framework – the xcopy deploy instructions have been updated to show you how to do this.

    Below are the notes from the release page.

    ======

    This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new:

    Analysis Services Tabular
    - Smart Diff
    - Tabular Actions Editor
    - Tabular HideMemberIf
    - Tabular Pre-Build

    Fixes and Updates
    - The Unused Datasets feature for Reporting Services now accounts for new features in Reporting Services 2008 R2 like Lookups and new features in Reporting Services 2012.
    - SSIS: emit an informational message when a variable has an expression defined and EvaluateAsExpression = False
    - SSAS: roles report points to wrong server
    - SSIS: Variable Copy / Move broken in v1.5
    - "Unused DataSets Report" not showing up in context menu on VS2005 if Solution Folders used
    - SSAS Tabular: Create a UI for managing actions
    - SSAS Tabular: Smart Diff improvements for new schema and Tabular models
    - SSIS: Copy/Move Variable erroring due to custom Control Flow item icon
    - SSIS Performance Visualization: index out of range
    - Fixing bugs in AggManager when aggregation design IDs don't match names

    The exe downloads are a self-extracting installer; the zip downloads allow for an xcopy deploy. Make sure to note the updated xcopy deploy instructions for SQL Server 2012.

    Read the article

  • Arçelik A.S. Uses Advanced Analytics to Improve Product Development

    - by Sylvie MacKenzie, PMP
    "Oracle’s Primavera P6 Enterprise Project Portfolio Management’s advanced analytics gives us better insight into the product development process by helping us to identify potential roadblocks.” – Iffet Iyigun Meydanli, Innovation and System Development Manager, R&D Center, Arçelik A.S. Founded in 1955, Arçelik A.S. is now the leading household appliance manufacturer in Turkey, and the third-largest household appliance company in Europe. It operates 14 production facilities in five countries (Turkey, Romania, Russia, China, and South Africa), with international sales and marketing offices in 20 countries. Additionally, the company manages 10 brands (Arçelik, Beko, Grundig, Blomberg, Elektrabregenz, Arctic, Leisure, Flavel, Defy, and Altus). The company has a household presence in more than 100 countries, including China and the United States. Arçelik’s Beko brand is among the top-10 household appliance brands in world, as a market leader for refrigerators, freezers, and washing machines in the United Kingdom. Arçelik implemented Oracle’s Primavera P6 Enterprise Project Portfolio Management for improved management of its design and manufacturing projects. With the solution, Arelik has improved its research and development (R&D) with the ability to evaluate technology risks when planning its projects. Also, it is now more easy to make plans for several locations, monitor all resources, and plan for future projects.  Challenges Improve monitoring of R&D resources?including human resources and critical laboratory equipment?to optimize management of the company’s R&D project portfolio Establish a transparent project platform to enable better product and process planning, gain insight into product performance, and facilitate advanced analytics that support R&D and overall business decisions Identify potential roadblocks for better risk management Solutions Worked with Oracle Partner PRM to implement Oracle’s Primavera P6 Enterprise Project Portfolio Management to manage the entire household-appliance, R&D project portfolio lifecycle, enabling managers and project leaders to better track and monitor resources and deliverables in real time Improved risk analysis and evaluation abilities for R&D projects Supported long-term planning needs Used advanced reporting features to capture data needed for budgeting and other project details, including employee performance evaluations Improved monitoring abilities and insight into the overall performance of products postproduction Enabled flexible, fast, and customized reporting with the P6 dashboard on a centralized platform to meet custom reporting needs for project leaders and support on-time and on-budget deliverables Integrated with other corporate departments, such as accounts payable, to upload project invoice data into the Primavera solution and the company’s e-mail system, so that project leaders will be alerted about milestones and other project related information Partner“Oracle Partner PRM provided us with a quick, reliable, and solution-focused approach to its support,” said Iffet Iyigun Meydanli, innovation and system development manager, R&D Center, Arçelik A.S. “The company’s service covered the entire spectrum of our needs, including implementation, training, configuration, problem solving, and integration.”

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker System. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including trace, log, checkpoint before, suspend, abort, and several others. With 11gR1, events can now be triggered by DDL operations, and variables can be passed in and out of the system to shell scripts.

    Some good use cases for this feature are:
    - Automatic switchover to the secondary system during planned outages
    - Better monitoring over source systems’ performance and automated switchover to the standby system in case of an outage with the primary system
    - Automatic switchover from initial load to changed data movement
    - Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency
    - Automatic stoppage of the Delivery module to allow end-of-day reporting
    - Finding, tracking, and reporting on transactions that are of interest, including the ones that do not have primary keys or transaction record numbers

    If you would like to see a demo, please visit our YouTube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions of the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.
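    For a rough feel of what an event-marker rule looks like, the sketch below shows EVENTACTIONS clauses in Extract/Replicat parameter files. This is an illustration based on general GoldenGate parameter syntax, not taken from the webcast, and the table, column, and script names are invented:

    -- Illustrative only: stop Delivery gracefully when an end-of-day
    -- marker row arrives, so end-of-day reporting can run.
    MAP sched.event_marker, TARGET sched.event_marker,
        FILTER (@STREQ (event_type, "EOD") = 1),
        EVENTACTIONS (LOG INFO, STOP);

    -- Illustrative only: hand off to a shell script when a failover
    -- marker is captured, without applying the marker row itself.
    TABLE ops.event_failover, EVENTACTIONS (SHELL "switchover.sh", IGNORE);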

    Read the article

  • Modifying the SL/WIF Integration Bits to support Issued Token Credentials

    - by Your DisplayName here!
    The SL/WIF integration code that ships with the Identity Training Kit only supports Windows and UserName credentials to request tokens from an STS. This is fine for simple single-STS scenarios (like a single IdP). But the more common pattern for claims/token based systems is to split the STS roles into an IdP and a Resource STS (or whatever you wanna call it). In this case, the second leg requires presenting the issued token from the first leg – this is not directly supported by the bits. But they can be easily modified to accomplish this.

    The Credential

    First we need a class that represents an issued token credential. Here we store the RSTR that got returned from the client-to-IdP request:

    public class IssuedTokenCredentials : IRequestCredentials
    {
        public string IssuedToken { get; set; }
        public RequestSecurityTokenResponse RSTR { get; set; }

        public IssuedTokenCredentials(RequestSecurityTokenResponse rstr)
        {
            RSTR = rstr;
            IssuedToken = rstr.RequestedSecurityToken.RawToken;
        }
    }

    The Binding

    Next we need a binding to be used with issued token credential requests. This assumes you have an STS endpoint for mixed-mode security with SecureConversation turned off.

    public class WSTrustBindingIssuedTokenMixed : WSTrustBinding
    {
        public WSTrustBindingIssuedTokenMixed()
        {
            this.Elements.Add( new HttpsTransportBindingElement() );
        }
    }

    WSTrustClient

    The last step is to make some modifications to WSTrustClient to make it issued-token aware. In the constructor you have to check for the credential type, and if it is an issued token, store it away.

    private RequestSecurityTokenResponse _rstr;

    public WSTrustClient( Binding binding, EndpointAddress remoteAddress, IRequestCredentials credentials )
        : base( binding, remoteAddress )
    {
        if ( null == credentials )
        {
            throw new ArgumentNullException( "credentials" );
        }

        if (credentials is UsernameCredentials)
        {
            UsernameCredentials username = credentials as UsernameCredentials;
            base.ChannelFactory.Credentials.UserName.UserName = username.Username;
            base.ChannelFactory.Credentials.UserName.Password = username.Password;
        }
        else if (credentials is IssuedTokenCredentials)
        {
            var issuedToken = credentials as IssuedTokenCredentials;
            _rstr = issuedToken.RSTR;
        }
        else if (credentials is WindowsCredentials)
        { }
        else
        {
            throw new ArgumentOutOfRangeException("credentials", "type was not expected");
        }
    }

    Next – when WSTrustClient constructs the RST message to the STS, the issued token header must be embedded when needed:

    private Message BuildRequestAsMessage( RequestSecurityToken request )
    {
        var message = Message.CreateMessage( base.Endpoint.Binding.MessageVersion ?? MessageVersion.Default,
            IssueAction,
            (BodyWriter) new WSTrustRequestBodyWriter( request ) );

        if (_rstr != null)
        {
            message.Headers.Add(new IssuedTokenHeader(_rstr));
        }

        return message;
    }

    HTH
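    For context, a hypothetical two-leg call sequence with these classes might look like the code below. The endpoint addresses are invented, and the username binding plus the async Issue pattern (IssueAsync/IssueCompleted) are assumed to match the training kit's WSTrustClient, so adjust the names to your copy of the bits:

    // Leg 1: authenticate against the IdP with username/password.
    var idp = new WSTrustClient(
        new WSTrustBindingUsernameMixed(),
        new EndpointAddress("https://idp.example.com/sts"),
        new UsernameCredentials("alice", "secret"));

    idp.IssueCompleted += (s, e) =>
    {
        // Leg 2: present the RSTR from the IdP to the Resource STS.
        var rsts = new WSTrustClient(
            new WSTrustBindingIssuedTokenMixed(),
            new EndpointAddress("https://rsts.example.com/sts"),
            new IssuedTokenCredentials(e.Result));

        rsts.IssueAsync(new RequestSecurityToken());
    };

    idp.IssueAsync(new RequestSecurityToken());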

    Read the article

  • New computer hangs on shutdown/reboot, how to troubleshoot?

    - by Torben Gundtofte-Bruun
    My system is working perfectly, but it freezes during shutdown/reboot/suspend/hibernate: all windows and the menu bar disappear but the desktop wallpaper remains. It doesn't even show the shutdown screen (the one with the animated dots) where I could hit ESC and watch the shutdown console text. The system is brand-new and fully updated using Update Manager.

    How can I determine what is causing the freeze? Is there a log I can investigate? How can I fix this? I see no obvious cause of the freeze. The only USB attachment is a mouse/keyboard, I don't have any external storage attached, and I don't have any programs running (the machine freezes even when doing shutdown right from the login screen).

    What I've tried so far:
    - Based on other questions (this, this, and this) that suggest some ACPI settings, I've tried sudo shutdown -h now to see whether the shutdown console text display offers any hints, but the system doesn't even get that far - it still freezes while the screen shows the desktop background image, without any toolbars. Only sudo shutdown --force works, but that's not a solution.
    - Editing the grub menu to add acpi=off to the kernel didn't help. I guess there's not much point in trying the other (lesser) ACPI suggestions?
    - Adding noapic to the grub entry had no discernible effect. Adding nolapic instead did something (I had removed the quiet option) - the system managed to continue further with the shutdown, right until the line "Checking for running unattended-upgrades:", which were the last characters on the screen.
    - I've also checked the system BIOS, especially regarding power options, but didn't see anything out of the ordinary. Switching the BIOS standby setting from S3 to S1 didn't help. The standby setting can't be disabled, and there are no other ACPI-related settings AFAIK. BIOS reset didn't help. Not surprised; I hadn't changed anything.
    - I tried going to a virtual console (Ctrl+Alt+F1) as suggested by djeikyb, and from there did a shutdown -h now, and it froze there too, after this console output. I didn't try killing processes one at a time because I'm still too much of a newbie to figure out how to do that.
    - Booting with kernel 2.6.35.22 rather than 2.6.35.25 didn't help.
    - Disabling the Nvidia drivers didn't help.
    - Booting from Live CD (a USB stick in fact) didn't help; it freezes the same way. Booting from Live CD with acpi=off noapic nolapic didn't help either. Neither did just nolapic. So evidently this is not some custom setting in my install, but some sort of basic issue.
    - MemTest completed in 1 hour without errors.

    Read the article

  • OpenGL ES 2 jittery camera movement

    - by user16547
    First of all, I am aware that there's no camera in OpenGL (ES 2), but from my understanding, proper manipulation of the projection matrix can simulate the concept of a camera. What I'm trying to do is make my camera follow my character. My game is 2D, btw.

    I think the principle is the following (take Super Mario Bros or Doodle Jump as reference - actually I'm trying to replicate the mechanics of the latter): when the character goes beyond the center of the screen (in the positive axis/direction), update the camera to be centred on the character; else keep the camera still. I did accomplish that; however, the camera movement is noticeably jittery and I have run out of ideas how to make it smoother.

    First of all, my game loop (following this article):

    private int TICKS_PER_SECOND = 30;
    private int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
    private int MAX_FRAMESKIP = 5;

    @Override
    public void run() {
        loops = 0;
        if(firstLoop) {
            nextGameTick = SystemClock.elapsedRealtime();
            firstLoop = false;
        }
        while(SystemClock.elapsedRealtime() > nextGameTick && loops < MAX_FRAMESKIP) {
            step();
            nextGameTick += SKIP_TICKS;
            loops++;
        }
        interpolation = ( SystemClock.elapsedRealtime() + SKIP_TICKS - nextGameTick ) / (float)SKIP_TICKS;
        draw();
    }

    And the following code deals with moving the camera. I was unsure whether to place it in step() or draw(), but it doesn't make a difference to my problem at the moment, as I tried both and neither seemed to fix it. center just represents the y coordinate of the centre of the screen at any time; initially it is 0. The camera object is my own custom "camera", which basically is a class that just manipulates the view and projection matrices.

    if(character.getVerticalSpeed() >= 0) { //only update camera if going up
        float[] projectionMatrix = camera.getProjectionMatrix();
        if( character.getY() > center) {
            center += character.getVerticalSpeed();
            cameraBottom = center + camera.getBottom();
            cameraTop = center + camera.getTop();
            Matrix.orthoM(projectionMatrix, 0, camera.getLeft(), camera.getRight(),
                    center + camera.getBottom(), center + camera.getTop(),
                    camera.getNear(), camera.getFar());
        }
    }

    Any thoughts on what I should try, or on what I am doing wrong?

    Update 1: I think I updated every value you can see on screen to check whether the jittery movement is affected by that, but nothing changed, so something must be fundamentally flawed with my approach/calculations.

    Read the article

  • Strategy to use two different measurement systems in software

    - by Dennis
    I have an application that needs to accept and output values in both US Customary units and the metric system. Right now the conversion, input, and output are a mess: you can only enter values in the US system, but you can choose the output to be US or metric, and the code that does the conversions is everywhere. So I want to organize this and put together some simple rules:
    - The user can enter values in either US or metric units, and the user interface will take care of marking this properly.
    - All units will be stored internally as US, since the majority of the system already stores its data that way and depends on this. It shouldn't matter, I suppose, as long as you don't mix units.
    - All output will be in US or metric, depending on the user's selection/choice/preference.
    In theory this sounds great and seems like a solution. However, one little problem I came across is this: there is some data stored in code or in the database that already returns data like this: 4 x 13/16" screws, which means "four times 13/16" screws". I need the 13/16" to be in either US or metric. Where exactly do I put the conversion code for this unit? The above already mixes presentation and data, but the data for the field I need to populate is that whole string. I can certainly split it up into the number 4, the 13/16", the " x ", and the " screws", but the question remains: where do I put the conversion code? Different locations for conversion routines:
    1) Right now the string is built in a class, and I could put conversion code right into that class; it may be a good solution. Except then, to be consistent, I will be putting conversion procedures everywhere in the code at the data source, or right after reading from the database. The problem, though, is that the rest of my codebase would then have to deal with two systems.
    2) According to the rules, my idea was to put it in the view script - the last chance to modify the value before it is shown to the user. That may be the right thing to do, but it strikes me it may not always be the best solution. (First, it complicates the view script a tad; second, I need to do more work on the data side to split things up, or do extra parsing, as in my case above.)
    3) Another solution is to do this somewhere in the data-preparation step - somewhere in the middle, after the data source but before the view. This strikes me as messy, and that could be the reason why my codebase is in such a mess right now.
    It seems that there is no best solution. What do I do?
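    A minimal sketch of one way to make the second and third rules enforceable in code - a value type that stores the canonical US unit and converts only at presentation time (class and method names are illustrative assumptions, not from the original question):

        // illustrative sketch: one canonical unit internally, conversion in exactly one place
        public final class Length {
            private static final double MM_PER_INCH = 25.4;
            private final double inches; // canonical internal unit (the "store as US" rule)

            private Length(double inches) { this.inches = inches; }

            public static Length ofInches(double in) { return new Length(in); }
            public static Length ofMillimetres(double mm) { return new Length(mm / MM_PER_INCH); }

            public double inInches() { return inches; }
            public double inMillimetres() { return inches * MM_PER_INCH; }

            // presentation-side formatting, driven by the user's preference (the output rule)
            public String format(boolean metric) {
                return metric ? String.format("%.1f mm", inMillimetres())
                              : String.format("%.3f\"", inInches());
            }
        }

    Under this approach the 4 x 13/16" screws value would be stored structured - a count plus a Length - so the view script only ever calls format(), and no other layer needs to know about two systems.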

    Read the article

  • Ubuntu Linux won't display netbook's native resolution

    - by Daniel
    FYI: my netbook model is HP Mini 210-1004sa, which comes with an Intel Graphics Media Accelerator 3150 and a 10.1" Active Matrix Colour TFT display, 1024 x 600. I recently removed Windows 7 Starter from my netbook and replaced it with Ubuntu 12.10. The problem is that the OS doesn't seem to recognise the native display resolution of 1024x600, i.e. the bottom bits of Ubuntu are hidden beneath the screen, and the only two available resolutions are the default 1024x768 and 800x600. I've also thought about replacing Ubuntu with Lubuntu or Puppy Linux, as the system does run a bit slow, but I can't, as then I wouldn't be able to access the taskbar and application menu, which would be hidden beneath the screen. Only Ubuntu with Unity is currently usable, as the Unity Launcher is visible enough. I was able to define a custom resolution of 1024x600 using the Q&A How set my monitor resolution?, but when I set that resolution, a black band appears at the top of the screen and the desktop area is lowered, with bits of it hidden beneath the screen. I tried leaving it at this new resolution and restarting the system to see if the black band would disappear and the display would fit correctly, but it gets reset to 1024x768 at startup and displays the following error:

        Could not apply the stored configuration for monitors
        none of the selected modes were compatible with the possible modes:
        Trying modes for CRTC 63
        CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
        CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 63: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
        CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)
        Trying modes for CRTC 64
        CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
        CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)
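    For reference, a hedged sketch of the xrandr approach the cited Q&A uses (the modeline shown is what cvt 1024 600 typically generates; the output name LVDS1 is an assumption - check xrandr -q for the real one):

        xrandr --newmode "1024x600_60.00" 49.00 1024 1072 1168 1312 600 603 613 624 -hsync +vsync
        xrandr --addmode LVDS1 "1024x600_60.00"
        xrandr --output LVDS1 --mode "1024x600_60.00"

    The "reset to 1024x768 at startup" symptom is consistent with these commands lasting only for the session; placing them in ~/.xprofile (or an equivalent autostart hook) makes them re-run at login.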

    Read the article

  • Nvidia driver overscan issue second monitor via dvi-d cable

    - by benmichael
    OK, I know that I have a bit of a bizarre setup, but here goes. I have an old laptop, an HP Pavilion 6000. The graphics card in there is a GeForce 7150M. The monitor connection is an old 18-pin. The external monitor I use is a Samsung SyncMaster 2333. Don't ask me why, but this monitor only has a DVI-D connection (yes, I have searched it). So I have the monitor plugged into the laptop. If I use any of the Nvidia proprietary drivers and try to set the resolution up to 1920x1080 (the monitor's native resolution), I get a massive overscan issue. Over the years I have tried to get this to work, tinkering with my xorg.conf to death. I have also tried this on every Ubuntu since 10.04, on all the corresponding Lubuntus, and on all the Linux Mints since Lisa - exact same issue. I have even tried it in WinDoze and it works perfectly there (although I did get the error once, but was unable to reproduce it). Using the open-source drivers it works perfectly iff I switch off the laptop monitor (this makes no difference with the Nvidia drivers). I would have happily gone on using the open-source drivers, except that since upgrading to Lubuntu 12.10, the open-source drivers make my monitor completely hazy and show the same overscan issue until I (through the haze, only because I know where things are) go to the monitor settings, activate the laptop's monitor, then deactivate it, and suddenly it comes right. I have to do this every time. So I have to find a way to fix one of them, so I may as well tackle the proprietary drivers - hence this overlong question. Amidst other things, I have tried nvidia-settings, but because the monitor is connected via the 18-pin, it gets detected as a VGA monitor and I am not given overscan-correction options. I have tried custom modelines (although there are always more of those to try), I have tried using xrandr, and I have tried all the FlatPanelOptions. What I have not tried is a Gentoo build, as I no longer have the time for that installation, but up to about three years ago, when I ran Gentoo exclusively, I did not have this issue. Below is a link to an image with a red highlight around the portion of the screen visible to me; the numbers around it are the numbers of pixels cut off. This does seem to drift a few pixels every now and then. Thanks in advance. Nvidia driver issue image

    Read the article

  • Attaching new animations onto skeleton via props, a good idea?

    - by Cardin
    I'm thinking of coding a game with an idea of mine. I've coded 2D games before, but I'm new to 3D programming, so I'd like to ask if this idea of mine is feasible or out of my depth. I'm making a game where there are many different characters for the player to choose from (JRPG style). So to save time, I have an idea of creating many varied characters from a completely naked body mesh and animation skeleton, standardised across all characters. For example, by placing different hair, boots, and armor props on the character mesh, new characters can be formed - kinda like playing dress-up with a Barbie doll. I'm thinking this can be done by having a bone on the prop that I can programmatically attach to the main mesh. Also, I plan to have some props add new animations to the base skeleton, so equipping particular props would give it new attack, damage, and idle animations. This is because I can't expect the character to have the same swinging animation whether he holds a big sword or an axe. I think this might be possible if the prop has its own instance of the animation skeleton containing only the new animations, and the base body mesh is parented to this new skeleton. So the base body mesh holds just the basic animations; other animations come from the props. My concerns are: 1) the props might not attach to the mesh properly and could jitter a lot; 2) since prop and body are animated differently, the props and base mesh will cause visual artefacts, like the naked thighs showing through the pants when the character walks; 3) a custom pipeline has to be developed to export skeletons without a mesh, and also to attach the base body mesh to a new skeleton at runtime in the game. So my question: are these features considered 'easy' to code? Or am I trying to do something few have ever succeeded with on their own? It feels like all of this can be done given enough time, and I know I definitely have to do a bit of bone matrix calculation, but I really don't want to drag out the development timeline unnecessarily by coding mathematically intense things or analyzing how to parse 3D export formats. I'm currently only at the game design stage, so if these features aren't a good idea, I can simply change the design of the game. (Unrelated to the question: I could always, as a last resort, give the characters predetermined outfit and weapon selections so as to animate everything manually.)
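    On concern 1 specifically: attaching a prop is normally a single matrix multiply per prop per frame, done after the skeleton has been posed. A minimal sketch of the idea (Java, using android.opengl.Matrix and column-major float[16] matrices to match the code style elsewhere on this page; the class and names are illustrative assumptions):

        import android.opengl.Matrix;

        final class PropAttachment {
            // illustrative helper: prop world transform = bone world transform * fixed local offset
            static float[] attachProp(float[] boneWorld, float[] propLocalOffset) {
                float[] propWorld = new float[16];
                Matrix.multiplyMM(propWorld, 0, boneWorld, 0, propLocalOffset, 0);
                return propWorld;
            }
        }

    Jitter usually comes from reading last frame's bone pose, or from mixing interpolated and non-interpolated poses, rather than from the multiply itself, so concern 1 is mostly a matter of update order.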

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are: Configuration - I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration). Checking and transforming incoming claims - in the claims authentication manager we have to deal with two situations: users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain; the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high-level logic is as follows:

        public override IClaimsPrincipal Authenticate(
            string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            // do nothing if anonymous request
            if (!incomingPrincipal.Identity.IsAuthenticated)
            {
                return base.Authenticate(resourceName, incomingPrincipal);
            }

            string uniqueId = GetUniqueId(incomingPrincipal);

            // check if user is registered
            RegisterModel data;
            if (Repository.TryGetRegisteredUser(uniqueId, out data))
            {
                return CreateRegisteredUserPrincipal(uniqueId, data);
            }

            // authenticated by ACS, but not registered:
            // create unique id claim
            incomingPrincipal.Identities[0].Claims.Add(
                new Claim(Constants.ClaimTypes.Id, uniqueId));

            return incomingPrincipal;
        }

    User Registration - the registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application-defined claims without requiring the user to re-sign-in. Authorization - all pages that should be reachable only by registered users check for a special application-defined claim that only registered users have. You can nicely wrap that in a custom attribute in MVC:

        [RegisteredUsersOnly]
        public ActionResult Registered()
        {
            return View();
        }

    HTH

    Read the article

  • All hail the Excel Queen

    - by Tim Dexter
    An excellent question this past week from dear ol' Blighty - actually from Brian at Nextgen Clearing Ltd in the big smoke (London). Brian was developing an Excel template and wanted to be able to reference the data fields multiple times inside the template. Damn good question, and I of course had some wacky solutions, from macros and cell referencing in Excel to pre-processing the data with an XSL stylesheet to copy the data multiple times so it could be referenced multiple times. All completely outlandish. Enter our Queen of Excel, Shirley from the development team. Shirley is single-handedly responsible for the Excel templates; I put her through six months of hell a few years back with a host of Excel template requirements. She was more than up to the challenge and has developed some great features. One of those is the ability to use the hidden XDO_METADATA sheet to map the data to custom named fields so they can be used multiple times in the template. So simple and very neat! Excel template and regular Excel users will know that you can only use the naming function once, i.e. the names have to be unique across the workbook, so you cannot reuse a cell/group name. To get around this you can come up with as many cell names as you want and map them in the XDO_METADATA sheet to the data columns/fields in your XML data set. For example:

        XDO_?DEPTNO_SUMMARY?    <?DEPTNO?>
        XDO_?DNAME_SUMMARY?     <?DNAME?>
        XDO_GROUP_?G_D_DETAIL?  <xsl:for-each-group select=".//G_D" group-by="./DEPTNO">
        XDO_?DEPTNO_DETAIL?     <?DEPTNO?>

    As you can see, DEPTNO has been referenced twice and mapped to different named values in the left-hand column. These values can then be used to name individual cells in the Excel template. You'll also notice a mix of Publisher <? ... ?> and native XSL commands, so the world is your oyster on the mapping and the complexity you might need for calculations or string manipulation. Shirley has kindly built out a sample Excel template, data and result here so you can see how it all hangs together. The XDO_METADATA sheet is hidden; just right-click on the sheet names and use the Unhide command to show it.

    Read the article
