Search Results

Search found 12793 results on 512 pages for 'format specifiers'.


  • Commandline Purge in AS11

    - by Dheeraj Kumar
    The AS11 B2B offering includes numerous features that are available through a command-line approach. Most of these supplement the already available user-interface-based approach. One such feature is the purging of runtime data. The command-line purge option enables users to purge runtime data based on various criteria. It is an ANT-based command that provides the flexibility to selectively set the criteria for purging the runtime data. Providing a command-line option also enables the administrator to purge in bulk, without visiting the B2B UI, which can also be used for automation purposes. By default, archival is turned on for the purge activity. As a prerequisite, the respective folder needs to be configured in the database with the proper permissions. When no filename is provided for the archived data, the sysdate is used for the filename. Below are the various options to purge the runtime data:

    - Message state: -Dmsgstate
    - Date range: -Dfromdate, -Dtodate (format: dd/mm/yyyy hh:mm AM/PM)
    - Trading partner: -Dtp
    - Direction: -Ddirection
    - Message type: -Dmsgtype
    - Agreement name: -Dagreement
    - IdType/value: -Didtype, -Didvalue
    - Archive: -Darchive (true/false; true by default)
    - Archive file name: -Darchivename (optional file name; used when archive is set to true)

    Note: when using -Darchivename, the value must be a unique file name. An existing file name used with -Darchivename throws an exception.

    Below are a few of the ant commands with various options.

    Purge based on date range and message state:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dfromdate="19/12/2009 1:04 AM" -Dtodate="19/12/2009 1:05 AM" -Dmsgstate=MSG_COMPLETE -Darchivename="filename.dmp"

    Purge based on direction:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Ddirection="OUTBOUND"

    Purge based on agreement name:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dagreement="agreement_name"

    Purge based on trading partner name:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dtp=GlobalChips

    Purge based on message state:

        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dmsgstate="MSG_COMPLETE"
        ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Ddirection="OUTBOUND" -Dmsgstate="MSG_COMPLETE"

    Read the article

  • Selectively Exposing Functionality in .NET

    - by David V. Corbin
    Any developer should be aware of the principles of encapsulation, cross-tier isolation, and cross-functional separation of concerns. However, it seems that few take the time to consider the adage of "minimal yet complete"1 when developing software. Consider the exposure of "business objects" to the user interface. Some common situations occur: accessing a given element requires a compound set of calls that do not "make sense" to the user interface, or more information than absolutely required is exposed to the user interface. It would be much cleaner if a custom interface were provided that exposed exactly (and only) the information required by the consumer. Achieving this using conventional techniques would require the creation (and maintenance!) of custom classes to filter and transpose the information into the ideal format. The ROI on this approach can be very difficult to ascertain, and as a result it is often ignored completely. There is another approach, which is largely made practical by virtue of the Action and Func delegates. From a caller's point of view, the following two samples can be used interchangeably:

        interface ISomeInterface
        {
            void SampleMethod1(string param);
            string SampleMethod2(string param);
        }

        class SomeClass
        {
            public Action<string> SampleMethod1 { get; }
            public Func<string, string> SampleMethod2 { get; }
        }

    The capabilities this simple change enables are significant (and remember, it does not change the syntax at the call site): the delegates can be initialized to directly call the proper method of any target class; the delegates can be dynamically updated based on the current state; and the "interface" can NOT be cast to the concrete class (which often exposes more functionality). By limiting the interface to the exact functionality required, the reduced surface area will typically result in lower development, testing and maintenance costs. We are currently in the process of posting a project on CodePlex which illustrates this technique (and many others) that have proven helpful in creating robust yet flexible solutions that are highly efficient2 and maintainable. This post will be updated as soon as the project is published. 1) Credit: Scott Meyers, Effective C++, Addison-Wesley 1992. 2) For those who read my previous post on performance, it should be noted that the use of delegates is on the same order of magnitude (actually a tiny amount faster) as conventional interfaces.
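    As a minimal sketch of how such delegate-backed members might be wired up (my own illustration; the class and method names are hypothetical, not from the post):

        // Hypothetical business object with more surface area than the UI needs.
        class OrderService
        {
            public void LogMessage(string text) { Console.WriteLine(text); }
            public string FormatLabel(string name) { return "Order: " + name; }
        }

        // Delegate-based facade: exposes exactly two operations, nothing more,
        // and cannot be cast back to OrderService by the consumer.
        class OrderFacade
        {
            public Action<string> SampleMethod1 { get; private set; }
            public Func<string, string> SampleMethod2 { get; private set; }

            public OrderFacade(OrderService target)
            {
                // Each delegate is initialized to call the proper method of the
                // target, and could be re-pointed later based on the current state.
                SampleMethod1 = target.LogMessage;
                SampleMethod2 = target.FormatLabel;
            }
        }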

    Read the article

  • Network Your Computers & Devices: Step by Step

    - by The Geek
    If you’re looking for a great book to help you learn more about Windows home networking, there’s a new book on the market by our good friend Ciprian, published by none other than Microsoft Press. Note: our friend Ciprian has been a guest contributor here on How-To Geek in the past, and he’s not only a geek who knows what he’s talking about, he’s also one of the more honest and decent people I’ve worked with. In his spare time, he runs the 7 Tutorials web site. The Book: One of the great things about this book is that you aren’t limited to just Windows networking—it also explains how to connect Windows 7, XP, Vista, Mac OS X, and even Linux on the same network and share folders and devices between them. Everything in the book is written in a typical How-To Geek step-by-step format, with plenty of screenshots and pictures to help you through the process. Book Outline: If you’re going to be spending some money on the book, you probably want to know what it’s all about, and since the Amazon page doesn’t give, well, much information at all, here’s the entire outline for you:

    - Setting Up a Router and Devices
    - Setting User Account on All Computers
    - Setting Up Your Libraries on All Windows 7 Computers
    - Creating the Network
    - Customizing Network Sharing Settings in Windows 7
    - Creating the Homegroup and Joining Windows 7 Computers
    - Sharing Libraries and Folders
    - Sharing and Working with Devices
    - Streaming Media Over the Network and the Internet
    - Sharing Between Windows XP, Windows Vista, and Windows 7 Computers
    - Sharing Between Mac OS X and Windows 7 Computers
    - Sharing Between Ubuntu Linux and Windows 7 Computers
    - Keeping the Network Secure
    - Setting Up Parental Controls
    - Troubleshooting Network and Internet Problems

    It’s a great book, with loads of information, and compared to most tech books isn’t very expensive—only $19.79 for the paperback and $9.99 for the Kindle version. Well worth it, and hey, it’s an official Microsoft Press book—written by a How-To Geek guest author. Network Your Computers & Devices Step by Step [Amazon]

    Read the article

  • Commit Review Questions

    - by Wes McClure
    Note: in this article, when I refer to a commit, I mean the commit you plan to share with the rest of the team; if you have local commits that you plan to amend/combine, I am referring to the final result. In time you will find these easier to do as you develop; however, all of these are valuable before checking in! The pre-commit review is a nice time to polish what might have been several hours of intense work, during which these things were the last things on your mind! If you are concerned about losing your work in the process of responding to these questions, first do a check-in and amend it as you go (assuming you are using a tool such as git that supports this), rolling the result into one nice commit for everyone else.

    - Did you review your commit, change by change, with a diff utility? If not, this is a list of reasons why you might want to start!
    - Did you test your changes? If the test is valuable enough to be automated, is it? If it's a manual testing scenario, did you at least try the basics manually?
    - Are the additions/changes formatted consistently with the rest of the project? Lots of automated tools can help here; don't try to manually format the code, that's a waste of time and as a human you will fail repeatedly. Are these consistent: tabs versus spaces, indentation, spacing, braces, line breaks, etc.? ReSharper is a great example of a tool that can automate this for you (.NET).
    - Are naming conventions respected? Did you accidentally use abbreviations without a good reason to use them? Does capitalization match the conventions in the project/language?
    - Are files partitioned? Sometimes we add new code to existing files in a pinch; it's a good idea to split these out if they don't belong, i.e. are new classes defined in new files, if this is something your project values?
    - Is there commented-out code? If you are removing an existing feature, get rid of it; that is why we have VCS. If it's not done yet, then why are you checking it in? Perhaps a stash commit (git)?
    - Did you leave debug or unnecessary changes?
    - Do you understand all of the changes? http://geekswithblogs.net/wesm/archive/2012/04/11/programming-doesnrsquot-have-to-be-magic.aspx
    - Are there spelling mistakes? Including in your commit message!
    - Is your commit message concise?
    - Is there follow-up work? Are there tasks you didn't write down that you need to follow up on?
    - Are readability or reorganization changes needed? These might be amended into the final commit, or they might be future work that needs to be added to the backlog.
    - Are there other things your team values that you should review?

    Read the article

  • Part 1 Basic Webtrends REST Examples

    - by GeekAgilistMercenary
    In this entry I just want to cover some examples of how to connect to the Webtrends DX Web Services. The DX Web Services use REST as the architecture, providing simple URI-based endpoints to connect to. With the Webtrends SDK you can connect to these services with your account information. Here are the basic steps to retrieve a profile list, the reports from one of those profiles, and then the report you want from that report list. The first step is to create a Webtrends user:

        WebTrends.Sdk.Account.User webtrendsUser = new Account.User();
        webtrendsUser.UserName = username;
        webtrendsUser.Password = password;
        webtrendsUser.AccountName = account;

    After you create the Webtrends user, simply request a profile list by getting a list of ProfileDefinition objects:

        List<WebTrends.Sdk.Profile.ProfileDefinition> profiles =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(webtrendsUser);

    Next you will want to grab a report based on the profile you are in and your credentials:

        List<WebTrends.Sdk.Report.ReportDefinition> reports =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(profiles[i], webtrendsUser);

    In the code above, i corresponds to the specific profile you want from the retrieved list of profiles in the profiles list. The common scenario is that one has pulled the profiles into a drop-down, combo, or list box that the user can select from. Then, when the user selects a specific profile, that profile object can be used to pull the list of ReportDefinitions. Once we have the report definitions, all sorts of criteria can be added together to query for a specific report. This is also where things can get a little tricky. For instance, take a look at the code below:

        WebTrends.Sdk.Factory.ReportFactory.CreateDimensionalReport(
            report.ID.ToString(), profiles[i].ID.ToString(), "2010m01", webtrendsUser);

    The CreateDimensionalReport takes four parameters for this particular overload: the report ID, the profile ID, the Webtrends date format, and the Webtrends user object. There are a number of other overloads available within this factory method that allow for passing the specific REST URI and other criteria to retrieve the report of your choice. In the near future we will be adding some more to this method also, which will provide more flexibility without needing to use the full REST URI. I will have more on this, so all you coders out there using Webtrends DX Services, I hope this is helpful! Enjoy. Original Entry
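    Pulled together, the calls above form a short end-to-end sketch (the variable wiring is my own; the SDK calls are the ones shown in this post):

        // Credentials for the Webtrends DX Web Services.
        WebTrends.Sdk.Account.User webtrendsUser = new WebTrends.Sdk.Account.User();
        webtrendsUser.UserName = username;
        webtrendsUser.Password = password;
        webtrendsUser.AccountName = account;

        // Profile list -> report list for the chosen profile -> one dimensional report.
        List<WebTrends.Sdk.Profile.ProfileDefinition> profiles =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(webtrendsUser);

        int i = 0; // index of the profile the user picked, e.g. from a drop-down
        List<WebTrends.Sdk.Report.ReportDefinition> reports =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(profiles[i], webtrendsUser);

        var report = reports[0]; // the report the user picked from the report list
        var result = WebTrends.Sdk.Factory.ReportFactory.CreateDimensionalReport(
            report.ID.ToString(), profiles[i].ID.ToString(), "2010m01", webtrendsUser);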

    Read the article

  • SQL SERVER – Get 2 of My Books FREE at Koenig Tech Day – Where Technologies Converge!

    - by pinaldave
    As a regular reader of my blog, you must be aware that I love to write books and talk about the various subjects of my books. The founders of Koenig Solutions are very old friends of mine; I have known them for many years, and they have been among the biggest supporters of my books. This coming weekend they have a technology event at their Bangalore location. Every attendee of the technology event will get a set of two books worth Rs. 450 – ‘SQL Server Interview Questions And Answers‘ and ‘SQL Wait Stats Joes 2 Pros‘. I am going to cover a couple of topics from the books and present as well. I am very confident that every attendee will have a great time. I will be covering the following subject: SQL Server Tricks and Tips for Blazing Fast Performance. Slow-running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how SQL Server has been set up. The session will focus on ways of identifying problems that slow down SQL Server, and tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. After the session is over, I will point to the exact location in the book where you can continue with further learning. I am pretty excited; this is more like a book reading, but in an entirely different format. The one-day event will cover four technologies in four separate interactive sessions: Microsoft SQL Server, Security, VMware/Virtualization, and ASP.NET MVC. Date of the event: Dec 15, 2012, 9 AM to 6 PM. Location of the event: Koenig Solutions Ltd. # 47, 4th Block, 100 feet Road, 3rd Floor, Opp to Shanthi Sagar, Koramangala, Bangalore - 560034. Mobile: 09008096122. Office: 080-41127140. The organizers have informed me that there are very limited seats for this event, and the technical session based on my book will start at 9 AM sharp. If you show up late there is a chance that you will not get a seat. Registration for the event is a MUST. Please visit this link for further information. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Advantages and disadvantages of building a single page web application

    - by ryanzec
    I'm nearing the end of a prototyping/proof-of-concept phase for a side project I'm working on, and I'm trying to decide on some larger-scale application design decisions. The app is a project management system tailored toward the agile development process. One of the decisions I need to make is whether to go with a traditional multi-page application or a single-page application. Currently my prototype is a traditional multi-page setup; however, I have been looking at backbone.js to clean up and apply some structure to my Javascript (jQuery) code. It seems like while backbone.js can be used in multi-page applications, it shines more with single-page applications. I am trying to come up with a list of advantages and disadvantages of using a single-page application design approach. So far I have:

    Advantages:
    - All data has to be available via some sort of API - this is a big advantage for my use case, as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single-page application will allow me to better test my REST API, since the application itself will use it. It also means that as the application grows, the API itself will grow, since that is what the application uses; no need to maintain the API as an add-on to the application.
    - More responsive application - since all data loaded after the initial page is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing.

    Disadvantages:
    - Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and on the client side in Javascript.
    - Business logic in Javascript - I can't give any concrete examples of why this would be bad, but it just doesn't feel right to me to have business logic in Javascript that anyone can read.
    - Javascript memory leaks - since the page never reloads, Javascript memory leaks can happen, and I would not even know where to begin to debug them.

    There are also other things that are kind of double-edged swords. For example, with single-page applications, the data processed for each request can be a lot less, since the application will be asking for the minimum data it needs for the particular request; however, it also means that there could be a lot more small requests to the server. I'm not sure if that is a good or a bad thing. What are some of the advantages and disadvantages of single-page web applications that I should keep in mind when deciding which way I should go for my project?

    Read the article

  • Action delegate in C#

    - by Jalpesh P. Vadgama
    In the last few posts I have written a lot about delegates, and this post is also part of that series. In this post we are going to learn about Action delegates in C#. Following is a list of posts related to delegates: Delegates in C#; Multicast Delegates in C#; Func Delegates in C#. Action delegates in C#: As per MSDN, Action delegates are used to pass a method as a parameter without explicitly declaring a custom delegate. Action delegates are used to encapsulate a method that does not have a return value. The C# 4.0 Action delegate has the following variants, taking anywhere from no parameters up to 16: Action – takes no parameters and does not return any value; and Action(T), Action(T1,T2), Action(T1,T2,T3), … through Action(T1,T2,…,T16) – one variant for each parameter count from 1 to 16. So an Action delegate can have up to 16 parameters. Sounds interesting!!… Enough theory now; it's time to implement real code. Following is the code for that:

        using System;
        using System.Collections.Generic;

        namespace DelegateExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Action<String> Print = p => Console.WriteLine(p);
                    Action<String, String> PrintAnother =
                        (p1, p2) => Console.WriteLine(string.Format("{0} {1}", p1, p2));
                    Print("Hello");
                    PrintAnother("Hello", "World");
                }
            }
        }

    In the above code you can see that I have created two Action delegates, Print and PrintAnother. Print has one string parameter and prints it, while PrintAnother has two string parameters and prints both strings via Console.WriteLine. Now it's time to run the example, and following is the output, as expected. That's it. Hope you liked it. Stay tuned for more updates!!

    Read the article

  • Apache2 syntax error, can't access 000-default

    - by enrique2334
    I have been using Apache2 and Webmin with my Raspberry Pi. After a restart and reinstallations, Apache won't start:

        > sudo /etc/init.d/apache2 restart
        apache2: Syntax error on line 268 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/sites-enabled/000-default: No such file or directory
        Action 'configtest' failed.
        The Apache error log may have more information.
        failed!

    The file 000-default is there but cannot be opened; its permissions are root:root. My apache2.conf file looks like this (bottom half):

        # ErrorLog: The location of the error log file.
        # If you do not specify an ErrorLog directive within a <VirtualHost>
        # container, error messages relating to that virtual host will be
        # logged here. If you *do* define an error logfile for a <VirtualHost>
        # container, that host's errors will be logged there and not here.
        #
        ErrorLog ${APACHE_LOG_DIR}/error.log

        #
        # LogLevel: Control the number of messages logged to the error_log.
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        #
        LogLevel debug

        # Include module configuration:
        Include mods-enabled/*.load
        Include mods-enabled/*.conf

        # Include list of ports to listen on and which to use for name based vhosts
        Include ports.conf

        #
        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
        #
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        # Include of directories ignores editors' and dpkg's backup files,
        # see the comments above for details.

        # Include generic snippets of statements
        Include conf.d/

        # Include the virtual host configurations:
        Include sites-enabled/

        <VirtualHost *:80>
            DocumentRoot /var/www
            <Directory /var/www>
                allow from all
                Options +Indexes
            </Directory>
            ServerName IMASERVER
        </VirtualHost>

    Does anyone know the cause of this?
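    One thing worth checking from the command line (my suggestion, assuming the usual Debian layout where entries in sites-enabled are symlinks into sites-available): a missing or dangling symlink produces exactly this "Could not open configuration file" error.

        # is 000-default a valid symlink, and does its target exist?
        ls -l /etc/apache2/sites-enabled/000-default
        ls -l /etc/apache2/sites-available/

        # if the link is missing or broken, re-enable the default site and retry
        sudo a2ensite default
        sudo /etc/init.d/apache2 restart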

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: “Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.” <rant> The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route it to a configurable listener. This gives you a lot of flexibility – you can create a single trace file, or multiple ones. There is even nice tooling around it. SvcTraceViewer from the SDK lets you open the XML trace files – you can filter and sort by trace source and event type, aggregate multiple files… blablabla. Just what you would expect from a decent tracing infrastructure. Now comes Windows Azure. I was already very grateful that starting with SDK 1.2 we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener – which is the right way to go. The only problem is that the way this works is that all traces (from all sources) get written to an ETW trace. Then the DiagMon listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get “lost”. They appear in your message text at the end. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in an Azure Storage table – the svclog format is gone. So much for the existing tooling. To solve that problem, one workaround was to write your own trace listener (!) that creates svclog files inside of local storage and uses the DiagMon to copy those. Christian has a blog post about that. OK, done that. Now it turns out that this mechanism does not work anymore in 1.3 with Full IIS (see here). Quoting: “Some IIS 7.0 logs not collected due to permissions issues... The root cause to both of these issues is the permissions on the log files.” And the workaround: “To read the files yourself, log on to the instance with a remote desktop connection.” Now then, have fun with your multi-instance deployments…. </rant>
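    For reference, here is the kind of TraceSource setup the rant is comparing against: a minimal app.config sketch of a named source routed to an XML listener that SvcTraceViewer can open (my own illustration, not from the post; the source and file names are hypothetical).

        <configuration>
          <system.diagnostics>
            <sources>
              <!-- a named trace source routed to a configurable listener -->
              <source name="MyApp.Billing" switchValue="Information">
                <listeners>
                  <add name="xmlFile"
                       type="System.Diagnostics.XmlWriterTraceListener"
                       initializeData="MyApp.Billing.svclog" />
                </listeners>
              </source>
            </sources>
          </system.diagnostics>
        </configuration>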

    Read the article

  • How do I set up an nvidia graphics adapter to put out 1080p? It seems to be using interlaced mode

    - by keepitsimpleengineer
    After upgrading to 12.04, my mythbuntu client/server seems to be running in 1080i; the clue comes from:

        [ 1176.117] (II) NVIDIA(0): Setting mode "1920x1080_60i"
        [ 1231.340] (II) NVIDIA(0): Setting mode "DFP-1:1920x1080_60@1920x1080+0+0"

    This is from Xorg.0.log. This whole thing started with video tearing when watching MythTV recordings. It didn't happen in 10.10. Should I use "TVStandard" "HD1080p" in the Screen section, since this is a dedicated HTPC? It only connects to an HDTV (1080p) via HDMI. Here is the current xorg.conf file:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            # commented out by update-manager, HAL is now used and auto-detects devices
            # Keyboard settings are now read from /etc/default/console-setup
            # InputDevice "Keyboard0" "CoreKeyboard"
            # commented out by update-manager, HAL is now used and auto-detects devices
            # Keyboard settings are now read from /etc/default/console-setup
            # InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
            FontPath "unix/:7100"
        EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Mouse0"
        #    Driver "mouse"
        #    Option "Protocol" "auto"
        #    Option "Device" "/dev/psaux"
        #    Option "Emulate3Buttons" "no"
        #    Option "ZAxisMapping" "4 5"
        #EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Keyboard0"
        #    Driver "kbd"
        #EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "SAMSUNG"
            HorizSync 26.0 - 81.0
            VertRefresh 24.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GT 240"
            Option "TripleBuffer" "1"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    After a little digging, the question changes slightly, to wit... Per Chapter 19 of the nvidia README: "If the EDID for the display device reported a preferred mode timing, and that mode timing is considered a valid mode, then that mode is used as the "nvidia-auto-select" mode." The EDID for my HDMI-connected LCD monitor says to use the first detailed timing as preferred:

        Prefer first detailed timing : Yes

    Also:

        (--) NVIDIA(0): EDID maximum pixel clock : 230.0 MHz

    The list (from startx -- -verbose 6):

        (--) NVIDIA(0): Detailed Timings:
        (--) NVIDIA(0): 1920 x 1080 @ 60 Hz
        (--) NVIDIA(0): Pixel Clock : 148.50 MHz
        (--) NVIDIA(0): HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0): HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0): VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0): VSyncEnd, VTotal : 1089, 1125
        (--) NVIDIA(0): H/V Polarity : +/+

    This is the actual mode selected (from Xorg.0.log):

        (--) NVIDIA(0): 1920 x 1080 @ 60 Hz
        (--) NVIDIA(0): Pixel Clock : 74.18 MHz
        (--) NVIDIA(0): HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0): HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0): VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0): VSyncEnd, VTotal : 1094, 1124
        (--) NVIDIA(0): H/V Polarity : +/+
        (--) NVIDIA(0): Extra : Interlaced
        (--) NVIDIA(0): CEA Format : 5

    So my HTPC is down-converting to 1080i and then the monitor is up-converting to 1080p. How can I fix this, please?
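    One hedged approach that may be worth trying (my suggestion, not a confirmed fix): since the EDID exposes a proper progressive 1920x1080 @ 60 Hz detailed timing, explicitly request that modeline in the metamodes option instead of letting nvidia-auto-select pick the interlaced CEA mode.

        Section "Screen"
            Identifier "Screen0"
            ...
            Option "metamodes" "DFP: 1920x1080_60 +0+0"
        EndSection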

    Read the article

  • Heightmap in Shader not working

    - by CSharkVisibleBasix
    I'm trying to implement GPU-based geometry clipmapping and am having problems applying a simple heightmap to my terrain. For the heightmap I use a simple texture with the surface format "Single". I've taken the texture from here. To apply it to my terrain, I use the following shader code:

        texture Heightmap;
        sampler2D HeightmapSampler = sampler_state
        {
            Texture = <Heightmap>;
            MinFilter = Point;
            MagFilter = Point;
            MipFilter = Point;
            AddressU = Mirror;
            AddressV = Mirror;
        };

    Vertex shader:

        float4 worldPos = mul(float4(pos.x, 0.0f, pos.y, 1.0f), worldMatrix);
        float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x, worldPos.z, 0, 0));
        worldPos.y = elevation * 128;

    The complete vertex shader (also containing the clipmapping transforms) looks like this:

        float4x4 View : View;
        float4x4 Projection : Projection;
        float3 CameraPos : CameraPosition;
        float LODscale;  // the LOD ring index; 0: highest, x: lowest
        float2 Position; // the position of the part in the world

        texture Heightmap;
        sampler2D HeightmapSampler = sampler_state
        {
            Texture = <Heightmap>;
            MinFilter = Point;
            MagFilter = Point;
            MipFilter = Point;
            AddressU = Mirror;
            AddressV = Mirror;
        };

        // Accepts only float2 because we extract the 3rd dimension out of the heightmap
        float4 wireframe_VS(float2 pos : POSITION) : POSITION
        {
            float4x4 worldMatrix = float4x4(
                LODscale, 0, 0, 0,
                0, LODscale, 0, 0,
                0, 0, LODscale, 0,
                - Position.x * 64 * LODscale + CameraPos.x, 0, Position.y * 64 * LODscale + CameraPos.z, 1);

            float4 worldPos = mul(float4(pos.x, 0.0f, pos.y, 1.0f), worldMatrix);
            float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x, worldPos.z, 0, 0));
            worldPos.y = elevation * 128;

            float4 viewPos = mul(worldPos, View);
            float4 projPos = mul(viewPos, Projection);
            return projPos;
        }
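    One detail that stands out above (my observation, offered as a hedged sketch rather than a confirmed fix): tex2Dlod is being fed raw world-space coordinates, while samplers normally expect texture coordinates in the normalized [0..1] range, so anything outside that range just mirrors border texels. A variant that normalizes first, assuming a hypothetical terrainSize uniform holding the world-space extent the heightmap should cover:

        float terrainSize; // hypothetical: world-space width/depth covered by the heightmap

        float2 uv = worldPos.xz / terrainSize; // map world XZ into [0..1]
        float elevation = tex2Dlod(HeightmapSampler, float4(uv, 0, 0));
        worldPos.y = elevation * 128;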

    Read the article

  • SQL – What Does ACID Stand For in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit

    - by Pinal Dave
    We love puzzles. One of the brain’s main tasks is to solve puzzles. Sometimes puzzles are very complicated (e.g. solving a Rubik’s Cube or Sudoku) and sometimes they are very simple (multiplying 4 by 8, or finding the shortest route while driving). It is always fun to solve a puzzle, and it creates an experience which humans are not able to forget easily. The best puzzles are the ones where one has to do multiple things to reach the final goal. Let us do something similar today. We will have a contest where you can participate and win something interesting. Contest: This contest has two parts. Question 1: What does ACID stand for in the database? This question seems very easy, but here is the twist: your answer should explain a minimum of one of the properties of ACID in detail. If you wish, you can explain all four properties of ACID, but to qualify you need to explain a minimum of one property. Question 2: What is the size of the installation file of NuoDB for any specific platform? You can answer this question in the following format – "The NuoDB installation file is of size __ MB for the ___ platform." Click on the download link and download your installation file for NuoDB. You can figure out the file size from the properties of the file. We have exciting prizes for the winners. Prizes: 1) 24 Amazon gift cards of USD 10 for the next 24 hours, one card every hour (open anywhere in the world). 2) One grand winner will get the Joes 2 Pros SQL Server 2012 Training Kit worth USD 249 (open wherever Amazon ships books). Amazon | 1 | 2 | 3 | 4 | 5. Rules: The contest will be open till July 21, 2013. All valid comments will be hidden till the result is announced. The winners will be announced on July 24, 2013. Hint: Download NuoDB. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Any suggested approaches to track bugs/defects?

    - by deostroll
    What is the best way to track defect sources in TFS? We have various teams for a project, like the vulnerability team, the customer, pre-sales, etc. We give them a build and these teams independently test it. They do not have access to our TFS system, so they usually send in their defects via email, usually in an Excel format. Our testing team takes these up and logs them into TFS. Sometimes they modify the original defect description (in Excel) and add the expected/actual results. Sometimes they fail to cite the source. I am talking about managing the various sources as such. Is there a way we can add these sources into TFS and actually link a particular source with its defects, with individual comments associated with them (saying where in the source we can find the actual material for the defect)? Edit: I don't know if there is a way to manage various sources. Consider this: the vulnerability assessment team has come out with defects/suggestions. They captured them in an Excel file and passed it on to the testing team (in my case). The testing team takes the responsibility of elaborating each defect and logging it in TFS. Now say that the Excel file has come with 20 defect items. This is my source. (It answers the question of where this defect came from.) So ultimately when I am looking at a bug I know where it came from: I'll ultimately be looking at the email sent from the VA team which has the Excel file, or the Excel file itself sent by the VA team. It may be one of the 20 items in that Excel file. How should the tester link to this source just once? On the contrary, it does not make sense for the tester to attach the same Excel file 20 times (i.e. attach the same file for each of the 20 defects while logging them into TFS), right? I hope you get my point.

    Read the article

  • WPF Control Toolkits Comparison for LOB Apps

    In preparation for a new WPF project, I've been researching options for WPF control toolkits. While we want a lot of the benefits of WPF, the application is a fairly typical line-of-business (LOB) application. So we're not focused on things like media and animations, but instead on a simple, solid, intuitive, and modern user interface that allows for a well-architected separation of business logic and presentation layers. While WPF is mature, it hasn't lived the long life that WinForms has yet, so there is still a lot of room for third-party and community control toolkits to fill the gaps between the controls that ship with the Framework. There are two such gaps I was concerned about. As this is an LOB app, we have a need to present lots of data, and not surprisingly much of it is in grid format, with the need for high performance, grouping, inline editing, aggregation, printing and exporting, and other things that we've been doing with LOB apps for a long time. In addition, we want a dashboard style for the UI, in which the user can rearrange and shrink and grow tiles that house the content and functionality. From a cost perspective, building these types of well-performing controls from scratch doesn't make sense. So I evaluated what you get from the .NET Framework along with a few different options for control toolkits. I tried to be fairly thorough, but know that this isn't a detailed benchmarking comparison or an intense evaluation. It's just meant to be a feature-set comparison to be used when thinking about building an LOB app in WPF. I tried to list important feature differences and notes based on my experience with the trial versions and what I found in documentation, reference materials and samples. I've also listed the importance of the controls based on how I think they are needed in LOB apps. There are several toolkits available, but given I don't have unlimited time, I picked just a few. Maybe I'll add more later. The toolkits I compared are:

    - Telerik's RadControls for WPF, since I had heard some good things about Telerik
    - Infragistics NetAdvantage WPF, since both I and the customer have some experience with the vendor's tools
    - The WPF Toolkit on CodePlex, since many of my colleagues have used it
    - The Blacklight CodePlex project, which had WPF support for the Tile View control (with release 4.3, WPF is not going to be supported in favor of focusing only on Silverlight controls, so I dropped that from the comparison)

    Click Here to Download the WPF Control Toolkits Comparison. Hopefully this helps someone out there. Feel free to post a comment on your experiences or if you think something I listed is incorrect or missing.

    Read the article

  • How To: Spell Check InfoPath web form in SharePoint 2010

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/11/07/how-to-spell-check-infopath-web-form-in-sharepoint-2010.aspx

    This is a sequel to my 2011 post about How To: Spell Check InfoPath Web Form in SharePoint. This time I will share how I managed to achieve spell checking in SharePoint 2010. This time round, we have changed our online forms strategy to use custom lists instead of form libraries. I thought everything would be smooth sailing, as we are using all OOTB features. So, we customised a custom list form using InfoPath and added a few Rich Text Boxes (spell check is a requirement for this specific project). All is good in the InfoPath client, including the spell checker, so, happy days, I published straight away. Here come the surprises. I browsed to my custom list and clicked Add New Item. This launched my form in a modal dialog format. I went to my Rich Text Boxes to check the spell checker and, voila, it's disabled! I tried hacking the FormServer.aspx and the CustomSpellCheckEntirePage.js again, but the new FormServer.aspx behaves differently from MOSS 2007's. I searched for answers in many blogs to no avail, often ending up being linked back to my old blog post. I also tried placing the spell-check JavaScript into a Content Editor Web Part on the item's New Form and Edit Form. It launches the Spell Check dialog, but it does not spell-check the page correctly. At this point, I decided I needed to get my project across ASAP, so enough with the experimentation; I logged a ticket with Microsoft Premier Support. On a call with the support engineer, I browsed through the custom list and to the item to demonstrate my problem. Suddenly, the Spell Check tab in the toolbar was enabled! Surprised? Not much, it's Microsoft! Anyway, to cut my story short, here is a summary of my solution:

    1. Navigate to your custom list.
    2. In the ribbon toolbar, navigate to List > Customize List > Form Web Parts > Content Type Forms > (Item) New Form. This will display newifs.aspx, which is the page displayed when Add New Item is clicked. This page, just like any other SharePoint page, contains web parts. In this case, we have the InfoPath Form Web Part.
    3. Add a Content Editor Web Part (CEWP) on top of the InfoPath Form Web Part. (A blank CEWP will do for this example.)
    4. Navigate to Page and click Stop Editing.
    5. Click Add New Item again and navigate to a Rich Text Box. Tadah! The Spell Check tab is now enabled!

    Do the same steps for the (Item) Edit Form to enable spell checks when editing an item. This "no code" solution was discovered purely by accident!

    Read the article

  • Memory Glutton

    - by AreYouSerious
    I have to admit that I can't get enough storage. I have hard drives just sitting around in case I need to move something, or I'm going to a friend's and either they want something I have or I want something they might have. What I'm going to talk about today is cost-effective memory for devices. I don't know how this particular device will work in a camera, as that's not what I use in my camera; in fact I don't have a camera that doesn't use either SD or the old CompactFlash card, which isn't so compact anymore. There's this thing that uses two micro SD cards to double the capacity of your memory, and it costs about 4 bucks, without the micro SD cards. I have had one for about a year and was going to throw it away because I couldn't get it to work with my computer or with my Sony Reader. However, I found out in one last-ditch effort that this thing works beautifully with my Sony PSP. There is no software to speak of associated with this thing; you simply put in two SD cards of the same size (if you put in two different sizes it will still work; you'll only double the smaller card's size, though) and format through the PSP. Voila, you now have a 29 GB memory card for your PSP. Why is this important? Well, for starters you can carry more music and more videos. Second, if you have gone the way of the hacker... you can store more games on your card... There are just a few things you have to note... I speak from experience... you have to use the USB connection to the PSP to do any file moving; as I said previously, said card doesn't play well with my computers or card readers... I'm not saying it won't work at all, just that it hasn't worked with anything I own. Second, if for some reason you try to hack/crack your PSP, don't attempt to delete a game from the PSP; use the USB file browser to remove games. If you delete from the PSP you are likely to have to move all your files off, reformat and start again... just a couple of things I have noticed... if I had done something like that. Anyway, here's a link... http://www.photofast-adapter.com/ and if you want to buy one, get it off eBay; I've seen them as low as $1.99.

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables in separate databases. One of them has the information on all the SMS messages that passed through the company's servers, while the other one has the information on the actual billing of those SMS messages. My job is to compare samples of both of these tables (for example, the records between 1 and 2 PM) to see if there are any differences: SMS messages that were sent but not charged to the user, for whatever reason that may be happening. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds. I have, so far, two alternatives to do this:

    1. (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both of my temporary tables and start comparing one by one using cursors. In this case, the procedure would be run on the server where one of the sources is, so the contents of the other one would be looked up using a dblink.

    2. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its contents into a hash_map (key-value) using C++, where the key will be the phone number and the value a list of times of SMS messages sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and then check if timeX is in that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1).

    My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is the better option?
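    A minimal C++ sketch of option 2 (my own illustration; the file names are hypothetical, and a real run would need the 1-2 second tolerance handled rather than the exact match shown):

        #include <fstream>
        #include <iostream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        int main() {
            // Load the first sample: phone number -> list of send times.
            std::unordered_map<std::string, std::vector<std::string>> sent;
            std::ifstream first("sample_a.txt");    // hypothetical file name
            std::string number, time;
            while (first >> number >> time)
                sent[number].push_back(time);

            // Scan the second sample and report entries with no matching time.
            std::ifstream second("sample_b.txt");   // hypothetical file name
            while (second >> number >> time) {
                bool found = false;
                auto it = sent.find(number);
                if (it != sent.end())
                    for (const auto& t : it->second)
                        if (t == time) { found = true; break; } // exact match only;
                                                                // allow +/- 2 s in practice
                if (!found)
                    std::cout << number << ' ' << time << '\n'; // candidate "uncharged" SMS
            }
        }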

    Read the article

  • String contains trailing zeroes when converted from decimal [migrated]

    - by Locke
    I've run into an unusual quirk in a program I'm writing, and I was trying to figure out if anyone knew the cause. Note that fixing the issue is easy enough. I just can't figure out why it is happening in the first place. I have a WinForms program written in VB.NET that is displaying a subset of data. It contains a few labels that show numeric values (the .Text property of the labels are being assigned directly from the Decimal values). These numbers are being returned by a DLL I wrote in C#. The DLL calls a webservice which initially returns the values in question. It returns one as a string, the other as a decimal (I don't have any control over the webservice, I just consume it). The DLL assigns these to properties on an object (both of which are decimals) then returns that object back to the WinForm program that called the DLL. Obviously, there's a lot of other data being consumed from the webservice, but no other operations are happening which could modify these properties. So, the short version is: WinForm requests a new Foo from the DLL. DLL creates object Foo. DLL calls webservice, which returns SomeOtherFoo. //Both Foo.Bar1 and Foo.Bar2 are decimals Foo.Bar1 = decimal.Parse(SomeOtherFoo.Bar1); //SomeOtherFoo.Bar1 is a string equal to "2.9000" Foo.Bar2 = SomeOtherFoo.Bar2; //SomeOtherFoo.Bar2 is a decimal equal to 2.9D DLL returns Foo to WinForm. WinForm.lblMockLabelName1.Text = Foo.Bar1 //Inspecting Foo.Bar1 indicates my value is 2.9D WinForm.lblMockLabelName2.Text = Foo.Bar2 //Inspecting Foo.Bar2 also indicates I'm 2.9D So, what's the quirk? WinForm.lblMockLabelName1.Text displays as "2.9000", whereas WinForm.lblMockLabelname2.Text displays as "2.9". Now, everything I know about C# and VB indicates that the format of the string which was initially parsed into the decimal should have no bearing on the outcome of a later decimal.ToString() operation called on the same decimal. I would expect that decimal.Parse(someDecimalString).ToString() would return the string without any trailing zeroes. Everything I find online seems to corroborate this (there are countless Stack Overflow questions asking exactly the opposite...how to keep the formatting from the initial parsing). At the moment, I've just removed the trailing zeroes from the initial string that gets parsed, which has hidden the quirk. However, I'd love to know why it happens in the first place.
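    For what it's worth, a quick console sketch (my own illustration) reproduces the behavior and points at the likely cause: System.Decimal preserves the scale of the value it was parsed from, so the trailing zeroes survive the round trip through the DLL.

        decimal fromString = decimal.Parse("2.9000"); // scale 4 is kept internally
        decimal literal = 2.9m;                       // scale 1

        Console.WriteLine(fromString.ToString());      // "2.9000"
        Console.WriteLine(literal.ToString());         // "2.9"

        // One hedged way to drop the trailing zeroes before display:
        Console.WriteLine(fromString.ToString("G29")); // "2.9"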

    Read the article

  • How do I apply skeletal animation from a .x (DirectX) file?

    - by Byte56
    Using the .x format to export a model from Blender, I can load a mesh, armature and animation. I have no problems generating the mesh and viewing models in game. Additionally, I have animations and the armature properly loaded into appropriate data structures. My problem is properly applying the animation to the models. I have the framework for applying the models and the code for selecting animations and stepping through frames. From what I understand, the AnimationKeys inside the AnimationSet supplies the transformations to transform the bind pose to the pose in the animated frame. As small example: Animation { {Armature_001_Bone} AnimationKey { 2; //Position 121; //number of frames 0;3; 0.000000, 0.000000, 0.000000;;, 1;3; 0.000000, 0.000000, 0.005524;;, 2;3; 0.000000, 0.000000, 0.022217;;, ... } AnimationKey { 0; //Quaternion Rotation 121; 0;4; -0.707107, 0.707107, 0.000000, 0.000000;;, 1;4; -0.697332, 0.697332, 0.015710, 0.015710;;, 2;4; -0.684805, 0.684805, 0.035442, 0.035442;;, ... } AnimationKey { 1; //Scale 121; 0;3; 1.000000, 1.000000, 1.000000;;, 1;3; 1.000000, 1.000000, 1.000000;;, 2;3; 1.000000, 1.000000, 1.000000;;, ... } } So, to apply frame 2, I would take the position, rotation and scale from frame 2, create a transformation matrix (call it Transform_A) from them and apply that matrix the vertices controlled by Armature_001_Bone at their weights. So I'd stuff TransformA into my shader and transform the vertex. Something like: vertexPos = vertexPos * bones[ int(bfs_BoneIndices.x) ] * bfs_BoneWeights.x; Where bfs_BoneIndices and bfs_BoneWeights are values specific to the current vertex. When loading in the mesh vertices, I transform them by the rootTransform and the meshTransform. This ensures they're oriented and scaled correctly for viewing the bind pose. The problem is when I create that transformation matrix (using the position, rotation and scale from the animation), it doesn't properly transform the vertex. There's likely more to it than just using the animation data. I also tried applying the bone transform hierarchies, still no dice. Basically I end up with some twisted models. It should also be noted that I'm working in openGL, so any matrix transposes that might need to be applied should be considered. What data do I need and how do I combine it for applying .x animations to models?

    Read the article

  • Ubuntu 11.10 doesn't detect external USB hard drive

    - by Andrew
    I have been battling with this issue for a bit and cannot find the answer to it. dmesg sees the device, it being the Symwave WDC WD64...:

        media@Media-pc:~$ dmesg | tail -n 20
        [78678.719497] scsi 10:0:0:0: Direct-Access Generic- USB3.0 CRW -0 1.00 PQ: 0 ANSI: 0 CCS
        [78678.725621] scsi 10:0:0:1: Direct-Access Generic- USB3.0 CRW -1 1.00 PQ: 0 ANSI: 0 CCS
        [78684.073837] scsi 11:0:0:0: Direct-Access SYMWAVE WDC WD6400AAKS-0 3B01 PQ: 0 ANSI: 4
        [78691.008126] scsi 11:0:0:0: uas_eh_abort_handler tag 0
        [78691.008139] scsi 11:0:0:0: uas_eh_device_reset_handler tag 0
        [78691.008147] scsi 11:0:0:0: uas_eh_target_reset_handler tag 0
        [78691.008154] scsi 11:0:0:0: uas_eh_bus_reset_handler tag 0
        [78691.080307] usb 2-2.4: reset high speed USB device number 9 using ehci_hcd
        [78691.221427] scsi 11:0:0:0: Device offlined - not ready after error recovery
        [78691.221498] scsi 11:0:0:0: rejecting I/O to offline device
        [78691.221519] scsi 11:0:0:0: rejecting I/O to offline device
        [78691.222952] scsi 11:0:0:1: Enclosure SYMWAVE SES 3B01 PQ: 0 ANSI: 4
        [78691.223156] scsi 11:0:0:2: uas_sense_old: urb length 26 disagrees with IU sense data length 510, using 18 bytes of sense data
        [78691.225061] sd 11:0:0:0: Attached scsi generic sg3 type 0
        [78691.225344] ses 11:0:0:1: Attached Enclosure device
        [78691.225495] ses 11:0:0:1: Attached scsi generic sg4 type 13
        [78691.226266] sd 10:0:0:0: Attached scsi generic sg5 type 0
        [78691.226653] sd 10:0:0:1: Attached scsi generic sg6 type 0
        [78691.241647] sd 10:0:0:0: [sdd] Attached SCSI removable disk
        [78691.243832] sd 10:0:0:1: [sde] Attached SCSI removable disk

    It looks like it attaches sdd and sde. Now when I look in the Disk Utility it shows "Hard disk Symwave WD6400AAKS-0, device /dev/sdc". It doesn't show any other info than that; if I format, it says that it cannot open /dev/sdc (no device or address error). Underneath the device it does show two generic USB 3.0 CRW entries that are sdd and sde. Now if I do fdisk -l, it doesn't show the device:

        media@Media-pc:~$ sudo fdisk -l

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000247de

        Device Boot Start End Blocks Id System
        /dev/sda1 * 2048 152176639 76087296 83 Linux
        /dev/sda2 152178686 156301311 2061313 5 Extended
        /dev/sda5 152178688 156301311 2061312 82 Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x948fc822

        Device Boot Start End Blocks Id System
        /dev/sdb1 63 1953520064 976760001 7 HPFS/NTFS/exFAT

    So now I am confused. Any ideas how I get fdisk to see the device?

    Read the article

  • Responsive Design for your ADF Faces Web Applications

    - by Shay Shmeltzer
    Responsive web applications are a common pattern for designing web pages that adjust their UI based on the device that accesses them. With the increase in the number of ADF applications that are being accessed from mobile phones and tablets, we are getting more and more questions around this topic. Steven Davelaar wrote a comprehensive article covering key concepts in this area that you can find here. The article focuses on what I would refer to as server-adaptive applications, where the server adapts the UI it generates based on the device that is accessing the server. However, there is one more technique that is not covered in that article and can be used with Oracle ADF: CSS manipulation on the client, which can also achieve responsive design. I'll cover this technique in this blog entry. The main advantage of this technique is that the UI manipulation does not require the server to send over a new UI when a change is needed. This, for example, allows your page to change immediately when you change the orientation of your device. (By the way, this example was developed for one of the seminars in the upcoming Oracle ADF OTN Virtual Developer Day.) In the demo below you'll see a single page that changes the way it is displayed based on the orientation of the device. Here is the page with the tablet in landscape and portrait. To achieve this I'm using a CSS media query in my page template that changes the display property of a couple of style classes that are used in my page. The media query has this format:

        @media screen and (max-width:700px) {
            .narrow {
                display: inline;
            }
            .wide {
                display: none;
            }
            .adjustFont {
                font-size: small;
            }
            .icon-home {
                font-size: 24px;
            }
        }

    This changes the properties of the same style classes that are defined in my application's skin. Here is a quick demo video that shows you the full application and explains how it works.
    For those looking to replicate this, here are the basic files:

skin1.css:

@charset "UTF-8";
/**ADFFaces_Skin_File / DO NOT REMOVE**/
@namespace af "http://xmlns.oracle.com/adf/faces/rich";
@namespace dvt "http://xmlns.oracle.com/dss/adf/faces";

.wide {
    display: inline;
}
.narrow {
    display: none;
}
.adjustFont {
    font-size: large;
}
.icon-home {
    font-family: 'UIShellUGH';
    -webkit-font-smoothing: antialiased;
    font-size: 36px;
    color: #ffa000;
}

pageTemplate:

<?xml version='1.0' encoding='UTF-8'?>
<af:pageTemplateDef xmlns:af="http://xmlns.oracle.com/adf/faces/rich" var="attrs" definition="private"
                    xmlns:afc="http://xmlns.oracle.com/adf/faces/rich/component">
    <af:xmlContent>
        <afc:component>
            <afc:description>A template that will work on phones and desktop</afc:description>
            <afc:display-name>ResponsiveTemplate</afc:display-name>
            <afc:facet>
                <afc:facet-name>main</afc:facet-name>
            </afc:facet>
        </afc:component>
    </af:xmlContent>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <af:resource type="css">
        @media screen and (max-width:700px) {
            .narrow {
                display: inline;
            }
            .wide {
                display: none;
            }
            .adjustFont {
                font-size: small;
            }
            .icon-home {
                font-size: 24px;
            }
        }
        @font-face {
            font-family: 'UIShellUGH';
            src: url(data:application/x-font-woff;charset=utf-8;base64,d09GRk9UVE8AA..removed code here...AzV6b1g==) format('truetype');
            font-weight: normal;
            font-style: normal;
        }
    </af:resource>
    <af:panelGroupLayout id="pt_pgl4" layout="vertical" styleClass="sizeStyle">
        <af:panelGridLayout id="pt_pgl1">
            <af:gridRow marginTop="5px" height="40px" id="pt_gr1">
                <af:gridCell marginStart="5px" width="100%" marginEnd="5px" id="pt_gc1">
                    <af:panelGroupLayout id="pt_pgl3" halign="center" layout="horizontal">
                        <af:outputText value="h" id="ot2" styleClass="icon-home"/>
                        <af:outputText value="HR System" id="ot3" styleClass="adjustFont"/>
                    </af:panelGroupLayout>
                </af:gridCell>
            </af:gridRow>
            <af:gridRow marginTop="5px" height="auto" id="pt_gr2">
                <af:gridCell marginStart="5px" width="100%" marginEnd="5px" id="pt_gc2" halign="stretch">
                    <af:panelGroupLayout id="pt_pgl2" layout="scroll">
                        <af:facetRef facetName="main"/>
                    </af:panelGroupLayout>
                </af:gridCell>
            </af:gridRow>
            <af:gridRow marginTop="5px" height="20px" marginBottom="5px" id="pt_gr3">
                <af:gridCell marginStart="5px" width="100%" marginEnd="5px" id="pt_gc3">
                    <af:panelGroupLayout id="pt_pgl5" layout="vertical" halign="center">
                        <af:separator id="pt_s1"/>
                        <af:outputText value="Copyright Oracle Corp. 2013" id="pt_ot1" styleClass="adjustFont"/>
                    </af:panelGroupLayout>
                </af:gridCell>
            </af:gridRow>
        </af:panelGridLayout>
    </af:panelGroupLayout>
</af:pageTemplateDef>

Example from the page:

<af:gridRow id="gr3">
    <af:gridCell id="gc7" columnSpan="2">
        <af:panelGroupLayout id="pgl8" styleClass="narrow">
            <af:link text="Menu" id="l1">
                <af:showPopupBehavior triggerType="action" popupId="p1" align="afterEnd"/>
            </af:link>
        </af:panelGroupLayout>
        <af:panelGroupLayout id="pgl7" styleClass="wide">
            <af:navigationPane id="np1" hint="buttons">
                <af:commandNavigationItem text="Departments" id="cni1"/>
                <af:commandNavigationItem text="Employees" id="cni2"/>
                <af:commandNavigationItem text="Salaries" id="cni3"/>
                <af:commandNavigationItem text="Jobs" id="cni4"/>
                <af:commandNavigationItem text="Services" id="cni5"/>
                <af:commandNavigationItem text="Support" id="cni6"/>
                <af:commandNavigationItem text="Help" id="cni7"/>
            </af:navigationPane>
        </af:panelGroupLayout>
    </af:gridCell>
</af:gridRow>
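The example shows where the switch actually happens: the same grid cell holds both renderings of the navigation - a compact "Menu" link (styleClass="narrow") that opens a popup, and the full af:navigationPane (styleClass="wide") - and the media query simply flips which of the two is displayed, with no server round trip. Consuming the template works as with any ADF page template; a minimal sketch, assuming the template definition above is saved as responsiveTemplate.jspx (the viewId and ids are illustrative):

<af:pageTemplate viewId="/responsiveTemplate.jspx" id="pt1">
    <f:facet name="main">
        <!-- page content goes here; it renders inside the template's scrolling main area -->
    </f:facet>
</af:pageTemplate>

The facet name must match the "main" facet declared in the template's af:xmlContent section.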

    Read the article

  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3 and trying to do the following (as it is done in D3D):

1. Create a texture of a given width, height, and pixel format
2. Map the texture memory
3. Loop over the pixels and write them
4. Unmap the texture memory
5. Set the texture
6. Render

Right now, though, it renders as if the entire texture is black. I can't find a reliable source of information on how to do this; almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does (in this case it generates a 1-byte alpha-only texture, but it is rendered through the red channel for debugging):

GLuint pixelBufferID;
glGenBuffers(1, &pixelBufferID);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr);
glBindTexture(GL_TEXTURE_2D, 0);

glBindTexture(GL_TEXTURE_2D, textureID);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
// Memory copied here; I know this is valid because it is the same loop
// as in my working D3D version
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

And then here is the render loop:

// This chunk left in for completeness
glUseProgram(glProgramId);
glBindVertexArray(glVertexArrayId);
glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12);
GLuint transformLocationID = glGetUniformLocation(3, "transform");
glUniformMatrix4fv(transformLocationID, 1, true, somematrix);

// Not sure if this is all I need to do
glBindTexture(GL_TEXTURE_2D, pTex->glTextureId);
GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture");
glUniform1i(textureLocationID, 0);
glDrawArrays(GL_TRIANGLES, Offset*3, Triangles*3);

Vertex shader:

#version 330 core
in vec3 Position;
in vec2 TexCoords;
out vec2 TexOut;
uniform mat4 transform;

void main()
{
    TexOut = TexCoords;
    gl_Position = vec4(Position, 1.0) * transform;
}

Pixel shader:

#version 330 core
uniform sampler2D texture;
in vec2 TexCoords;
out vec4 fragColor;

void main()
{
    // Output color
    fragColor.r = texture2D(texture, TexCoords).r;
    fragColor.g = 0.0f;
    fragColor.b = 0.0f;
    fragColor.a = 1.0;
}
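A likely culprit: mapping and unmapping a GL_PIXEL_UNPACK_BUFFER only fills the buffer object. Unlike a mapped D3D texture, nothing reaches the texture until glTexSubImage2D (or glTexImage2D) is called while the PBO is bound, at which point its data argument is interpreted as a byte offset into the buffer rather than a client pointer. Two other probable contributors to the black output: both glVertexAttribPointer calls above write to attribute index 0, so TexCoords never receives data, and the texture is mipmap-incomplete under the default GL_NEAREST_MIPMAP_LINEAR minification filter, which samples as black in a core profile. Below is a hedged sketch of the missing pieces, reusing the question's variable names and assuming Position resolves to attribute location 0 and TexCoords to 1 (without layout qualifiers you would query or bind these explicitly):

// 1) Transfer the pixels from the PBO into the texture. With the PBO bound,
//    the last argument of glTexSubImage2D is an offset into the buffer.
glBindTexture(GL_TEXTURE_2D, textureID);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                GL_RED, GL_UNSIGNED_BYTE, (void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

// 2) The texture has no mipmaps, so a non-mipmap minification filter is
//    required for it to be sampling-complete.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// 3) Give the texture coordinates their own attribute slot; in the original
//    code the second call overwrote index 0.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, (void*)0);   // Position
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 20, (void*)12);  // TexCoords

Two shader-side issues are also worth checking: #version 330 core removes texture2D(), so the fragment shader should call texture() and rename the sampler (for example to diffuseMap), since "texture" collides with that built-in function name; and the vertex shader writes its varying to TexOut while the fragment shader reads TexCoords - in GLSL 3.30 the names must match across stages or the program will not link.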

    Read the article

  • Oracle Partner Day 2012: New business opportunities await you!

    - by A&C Redaktion
    How well do you know the latest developments in Oracle's "One Red Stack" corporate strategy? In the future you, as our partner, will be able to sell the entire Oracle product portfolio - software, hardware, and applications! Your profit: the new diversity. Boost your market presence! Open up new markets and new customer groups. Multiply your future success! Maximize your Potential - that is your watchword for the fiscal year and our motto for the Oracle Partner Day on October 29, 2012, at the Commerzbank Arena in Frankfurt. Experience in our breakout sessions where the sales plus lies for you, what your customer needs to know, and where you can win them over. All parallel breakout sessions will be repeated, so you can attend in any case. Focused on success: the new Oracle Alliances & Channels concept. We deliver the decisive arguments for customers who value sustainability and investment security - for you, if you want to achieve more in the future, and for everyone who can offer partner excellence from a single source. Get the product news from Oracle OpenWorld (OOW) in San Francisco (September 30 to October 4, 2012) first hand. In the Expert and Partner Service Zone you will find answers on all key topics; use the new speed-dating format there to quickly find the right contact for your sales and product questions. Take the test - we pay the test fee! Use the opportunity to get accredited as an OPN Implementation Specialist on the spot: register now for the official implementation exam at the Pearson Vue test center on site at the Oracle Partner Day. Choose your specialty - Fusion Middleware, Applications, Hardware, or Database - and go home an implementation specialist. Come to Oracle Partner Day 2012 - active partner networking, management contacts, and expert knowledge included! Secure one of the coveted seats and your participation now - in the exam, too! As an Oracle partner, attendance is of course free of charge for you. You will find further information on the Oracle Partner Day and the link to registration here. We look forward to seeing you! Yours, Christian Werner, Senior Director Alliances & Channels Germany. P.S.: The Oracle Day for end customers takes place directly after the Oracle Partner Day. As a partner, you are welcome to attend this event together with your customers; seats are limited. You can find further information on the Oracle Day here.

    Read the article


< Previous Page | 378 379 380 381 382 383 384 385 386 387 388 389  | Next Page >