Search Results

Search found 2501 results on 101 pages for 'alex ford'.


  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview
    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:
      - Different record types in one list that use different parsing rules
      - Hierarchical lists, for example customers with nested orders
      - Parsing instructions in the file data, such as delimiter types, field lengths, and type identifiers
      - Complex headers, such as multiple header lines or parseable information in the header
      - Skipping of lines
      - Conditional or choice fields
    Similar to the ODI File and XML File technologies, complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) File technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting table.
    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, because the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.
    How to Create an nXSD File
    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is still a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

        <?xml version="1.0"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                    xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    elementFormDefault="qualified" attributeFormDefault="unqualified"
                    nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
          <xsd:element name="Root">
            <xsd:complexType><xsd:sequence>
              <xsd:element name="Header">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
              <xsd:element name="Customer" maxOccurs="unbounded">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
            </xsd:sequence></xsd:complexType>
          </xsd:element>
        </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and many more. nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs, such as the file adapter definition, are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches.
    The nXSD schema in this example describes a file with a header row, followed by customer rows of three comma-delimited string fields each, for example:

        Redwood City Downtown Branch, 06/01/2011
        Ebeneezer Scrooge, Sandy Lane, Atherton
        Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:

      Table ROOT (1 row):
        ROOTPK         Primary key for the root element
        SNPSFILENAME   Name of the file
        SNPSFILEPATH   Path of the file
        SNPSLOADDATE   Date of load

      Table HEADER (1 row):
        ROOTFK         Foreign key to the ROOT record
        ROWORDER       Order of the row in the native document
        BRANCH         Data
        BRANCHORDER    Order of Branch within the row
        LISTDATE       Data
        LISTDATEORDER  Order of ListDate within the row

      Table ADDRESS (2 rows):
        ROOTFK         Foreign key to the ROOT record
        ROWORDER       Order of the row in the native document
        NAME           Data
        NAMEORDER      Order of Name within the row
        STREET         Data
        STREETORDER    Order of Street within the row
        CITY           Data
        CITYORDER      Order of City within the row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.
    How to Create a Complex File Data Server in ODI
    After creating the nXSD file and a test data file, and storing them on a local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, and the external database, can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but is strongly recommended for production use; ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse engineering the model will fail. All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, in Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.
    How to Create Models Based on a Complex File Schema
    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) will be created for each XSD element of complex type. Use of complex files as sources is straightforward; when using them as targets, you have to make sure that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well.
    Debugging and Error Handling
    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the Data Server.
    If either the nXSD has an error or the data does not comply with the schema, an error will be displayed. Sample error message:

        Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.

    Complex File FAQ

    Is the size of the native file limited by available memory?
    No. Since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Should I always use the complex file driver instead of the file driver in ODI now?
    No. Use the File technology for all simple file parsing tasks, for example any fixed-length or delimited files that have just one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled by the file driver.

    Are we generating XML out of flat files before we write them into a database?
    We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed as Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Is the nXSD file interchangeable with SOA Suite?
    Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Can I start the Native Format Builder from ODI Studio?
    No, the Native Format Builder has to be started from a JDeveloper instance with BPEL. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can develop nXSD files manually using XSD editors.

    When is the database data written back to the native file?
    Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is used exclusively for reading, so that no unnecessary write-backs occur.

    Is the nXSD metadata part of the ODI Master or Work Repository?
    No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying them or by using a network file system.

    Where can I find sample nXSD files?
    The Application Server Adapter Users Guide contains nXSD samples for various use cases.

    Read the article

  • Automatic desktop/work environment setup

    - by Alex
    I have this strange thing I am trying to do, so before I jump into it I was curious whether someone knows of an existing solution, or has advice on implementation. I run a small software company, and as it happens I often do very different types of work. When I do coding for a Java project I need Eclipse running, maybe a VM with something like an ActiveMQ server or whatever, plus terminals to tail -F the log files specific to the application, etc. When I do something like a weekly progress review with my team I need a few browser windows open and a gedit to take notes, and so on. Depending on the type of work I am doing, I generally have all of the related apps open in multiple different Workspaces. So in the example above, Eclipse would be open in Workspace 1, terminals would be sharing Workspace 2, and so on. What I am trying to do is to automate opening all these applications, positioning them on the screen, and assigning them to the proper Workspaces. My current idea consists of having a shell script that launches specific apps depending on what type of work I am about to start doing. Is there anything to aid this type of automation? Or is shell scripting my only option at this point? My current system is Ubuntu 10.04.

    Read the article

  • Freeing disk space on Ubuntu to use in Windows

    - by Alex
    I have a 250Gb drive on a laptop, which has Windows 7 on a 122Gb NTFS partition (which has a "boot" flag on it) and Ubuntu 12.04.1 on a 110Gb extended partition, of which the root ext4 partition is 108Gb and the swap is 1.74Gb. You can see everything in the screenshot below. My question is: I want to shrink the Linux root partition and then use that space to increase the Windows partition. How do I do that? Also, is it possible to increase the size of the swap partition without doing any damage? If so, how? I'm using GParted, and I'd say I'm pretty confident with it. Screenshot of my partitions

    Read the article

  • 2D Procedural Terrain with box2d Assets

    - by Alex
    I'm making a game that involves a tire moving through terrain that is generated randomly over 1000 points. The terrain is always a downwards incline. The actual box2d terrain extends one screen width behind and in front of the circular character. I'm now at a stage where I need to add gameplay elements to the terrain, such as chasms or physical objects (like the demo polygon in the picture), and am not sure of the best way to structure the procedural generation of the terrain and objects. I currently have a very simple for loop like so:

        for(int i = 0; i < kMaxHillPoints; i++) {
            hillKeyPoints[i] = CGPointMake(terrainX, terrainY);
            if(i%50 == 0) {
                i += [self generateCasmAtIndex:i];
            }
            terrainX += winsize.width/20;
            terrainY -= random() % ((int) winsize.height/20);
        }

    The generateCasmAtIndex function adds points to the hillKeyPoints array and increments the for loop by the required amount. If I want to generate box2d objects as well for specific terrain elements, I'll also have to keep track of the current position of the player and have some sort of array of box2d objects that need to be created at certain locations. I am not sure of an efficient way to accomplish this procedural generation of terrain elements with accompanying box2d objects. My thoughts are:
      1) Have many functions for each terrain element (chasm, jump etc.) which add elements to be drawn to an array that is checked on each game step, similar to what I've shown above.
      2) Create an array of terrain element objects that string together and are looped over to create the terrain and generate the box2d objects. Each object would hold an array of points to be drawn and an array of accompanying box2d objects.
    Any help on this is much appreciated, as I cannot see a 'clean' solution.

    Read the article

  • DevDays ‘10 The Netherlands day #2

    - by erwin21
    Day 2 of DevDays 2010, and again 5 interesting sessions at the World Forum in The Hague. The first session of the day, in the big World Forum theater, was from Scott Hanselman, who gave a lap around .NET 4.0. In his way of presenting he talked about all kinds of new features of .NET 4.0, like MEF, threading, parallel processing, changes and additions to the CLR and DLR, WPF, and all the new language features of .NET 4.0. After a small break it was time for session 2 from Scott Allen about tips, tricks and optimizations of LINQ. He talked about lazy and deferred execution, the difference between IQueryable and IEnumerable, and the two flavors of LINQ syntax. The lunch was again very well prepared and delicious, but after that it was time for session 3, Web Vulnerabilities and Exploits, from Alex Thissen. This was no normal session but more like a workshop: we decided which subjects were discussed, and they included OWASP, XSS and other injections, validation, and encoding. He gave some handy tips and tricks on how to prevent such attacks. Session 4 was about the new features of C# 4.0, from Alex van Beek. He talked about optional and named parameters, generic co- and contra-variance, the dynamic keyword, and COM interop features. He showed how to use them but also when not to use them. The last session of today, and also the last session of DevDays 2010, was about WCF best practices, from Gerben van Loon. He talked about 7 best practices that you must know when you are going to use WCF. With some quick demos he showed the problem and the solution for some common issues. They were two interesting days, and next year I will surely be attending again.

    Read the article

  • Why did Apple remove Python support in Mavericks, aka Mac OS X 10.9?

    - by alex gray
    Apple has removed Python support (at least on the Developer level) in 10.9. Python IS still on the machine in /System/Library/Frameworks/Python.framework... but trying to link to Python using the 10.9 SDK fails. /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/System/Library/Frameworks does not have Python. I'm not a Pythonista, but find it interesting that Apple has made this change. I don't understand why this is done and I'm a bit annoyed that I have to remove Python from my compilation units in order to compile with 10.9 SDK. Is this a statement by Apple, along the lines of "People aren't using Python very much anymore so we're going to phase out support"? Or was something else driving the change?

    Read the article

  • Libraries for eclipse with CCSv5 from Texas Instruments

    - by Alex
    My system is Ubuntu 11.10 64-bit, and I have to run a 32-bit version of Eclipse to use the TI plugins for CCSv5, but it doesn't work. I tried to run Eclipse in a 64-bit Java environment but it doesn't even start. Now I have "java version "1.6.0_30"" from Sun in 32-bit; Eclipse now starts but can't use the TI plug-ins, and I get the following errors in bash:

        /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so: falsche ELF-Klasse: ELFCLASS64
        /usr/lib/gio/modules/libgvfsdbus.so: falsche ELF-Klasse: ELFCLASS64
        Failed to load module: /usr/lib/gio/modules/libgvfsdbus.so

    and this in a popup window when I try to use the plugin:

        The selected wizard could not be started.
        Plug-in com.ti.ccstudio.project.ui was unable to load class com.ti.ccstudio.project.ui.internal.wizards.importexport.temp.ExternalProjectImportWizard.
        An error occurred while automatically activating bundle com.ti.ccstudio.project.ui (352).

    I tested the libraries with file:

        /usr/lib/gio/modules/libgvfsdbus.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, stripped
        /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, stripped

    The ia32-libs are installed.

    Read the article

  • Optimized Integration between Oracle Data Integrator and Oracle GoldenGate

    - by Alex Kotopoulis
    The Journalizing Knowledge Module for Oracle GoldenGate's CDC mechanism has just been released as part of the ODI 10.1.3.6_02 patch, and a very useful post from Mark Rittman gives detailed instructions about how to set it up in a sample environment. This integration combines the best of two worlds: the non-invasive, highly performant data replication of Oracle GoldenGate with the innovative and scalable E-LT concept of ODI, to perform complex loads into real-time data warehouses. Please also check out the new datasheet about this integration.

    Read the article

  • Exporting SWF With Transparent Background For Scaleform/UDK

    - by Alex Shepard
    After looking all over in the UDN and forums, I have yet to find a solution for this: I am currently using Flash CS3 and ActionScript 2.0 to build my Scaleform menus, and I can use them in the UDK. For various reasons I can't use the handy plugin Autodesk supplies to enable this export, so I publish my Flash documents to SWF the old-fashioned way and manually use the gfxexport.exe tool to get my .gfx file. I can then import into the UDK the normal way. My problem is that the Flash movies that I import will not alpha blend, even if the material is set to blend in the alpha channel of the target render texture. My project images are set up to export properly. My classpath for ActionScript 2.0 is set to the correct location. My HTML publish settings have window mode set to Transparent Windowless. Is it possible to export without the Scaleform Flash extension and still get the desired effects, and if so, how might I do so? Am I merely missing something from my project setup?

    Read the article

  • Would using AJAX only "Add to Cart" buttons be wise?

    - by Alex Erwin
    I want to AJAX-enable all of my Add To Cart buttons because search engine bots are indexing these and not paying attention to my robots file or sitemap. I just don't want to lose potential customers. I have seen a number of top sites, including Amazon, using heavily JavaScript-supported content; is it OK to follow the trend? The rest of my site progressively degrades, but I would really like to implement this because of the benefits to the customer (instant satisfaction), my infrastructure (constant page rebuilds), and allowing me to use SEO tools to optimize without the tool picking up thousands of "Add to Cart" widgets in my catalog. Thanks

    Read the article

  • Renault under threat from industrial espionage, intellectual property the target

    - by Simon Thorpe
    Last year we saw news of both General Motors and Ford losing a significant amount of valuable information to competitors overseas. Within weeks of the turn of 2011 we see the European car manufacturer, Renault, also suffering. In a recent news report, French Industry Minister Eric Besson warned the country was facing "economic war" and referenced a serious case of espionage which concerns information pertaining to the development of electric cars. Renault senior vice president Christian Husson told the AFP news agency that the people concerned were in a "particularly strategic position" in the company. An investigation had uncovered a "body of evidence which shows that the actions of these three colleagues were contrary to the ethics of Renault and knowingly and deliberately placed at risk the company's assets", Mr Husson said. A source told Reuters on Wednesday the company is worried its flagship electric vehicle program, in which Renault with its partner Nissan is investing 4 billion euros ($5.3 billion), might be threatened. This casts a shadow over the estimated losses of Ford ($50 million) and General Motors ($40 million). One executive in the corporate intelligence-gathering industry, who spoke on condition of anonymity, said: "It's really difficult to say it's a case of corporate espionage ... It can be carelessness." He cited a hypothetical example of an enthusiastic employee giving away too much information about his job on an online forum. While information has always been passed and leaked, inadvertently or on purpose, the rise of the Internet and social media means corporate spies or careless employees are now more likely to be found out, he added. We are seeing more and more examples of where companies like these need to invest in technologies such as Oracle IRM to ensure such important information can be kept under control. It isn't just the recent release of information into the public domain via the Wikileaks website that is of concern, but also the increasing threat of industrial espionage in cases such as these. Information rights management doesn't totally remove the threat, but the ability to control documents no matter where they exist certainly increases protection significantly. Every single time someone opens a sealed document, the IRM system audits the activity. This makes identifying a potential source for a leak much easier when you have an absolute record of every person who has had access to the documents. Oracle IRM can also help with accidental or careless loss. Often people use very sensitive information all the time and forget the importance of handling it correctly. With the ability to protect the information from screenshots and to prevent people from copying and pasting document information into social networks and other, unsecured documents, Oracle IRM brings a totally new level of information security that would have a significant impact on reducing the risk these organizations face of losing their most valuable information.

    Read the article

  • Design pattern for procedural terrain assets

    - by Alex
    I'm developing a procedural terrain class at the moment and am stuck on the correct design pattern. The terrain is 2D and is constructed from a series of (x,y) points. I currently have a method that just randomly adds points to an array of points to generate a random spread of points. However, I need a more elaborate system for generating the terrain. The terrain will be built from a series of recurring terrain structures, e.g. a pit, a jump, a hill, etc. Each structure will have some randomness assigned to it: each hill height will be random, each pit size will be random, and so on. Each terrain structure will have:
      - A property detailing the number of points making up that structure
      - A method for generating the points (not absolutely necessary)
    My current thinking is to have a class for each terrain structure, create a fixed number of terrain elements ahead of the player, loop over these, and add the corresponding points to the game. What is the best way to create these procedural terrain structures when they are ultimately just a set of functions for generating terrain elements? Is a class for each terrain element excessive? I'm developing the game for iPhone, so any Objective-C-related answers would be welcome.
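    As a sketch of the "class per terrain structure" idea above (written in C# purely for illustration since the poster's project is Objective-C; all names and numbers are hypothetical):

        // Illustrative only: one small class per recurring terrain structure,
        // each owning its point count and its own randomness.
        using System;
        using System.Collections.Generic;

        struct Point { public float X, Y; public Point(float x, float y) { X = x; Y = y; } }

        interface ITerrainStructure
        {
            int PointCount { get; }
            IEnumerable<Point> GeneratePoints(Point start, Random rng);
        }

        class Hill : ITerrainStructure
        {
            public int PointCount => 10;
            public IEnumerable<Point> GeneratePoints(Point start, Random rng)
            {
                float height = 20 + (float)rng.NextDouble() * 40;   // random hill height
                for (int i = 0; i < PointCount; i++)
                {
                    float t = i / (float)(PointCount - 1);          // 0..1 across the structure
                    yield return new Point(start.X + i * 8, start.Y + height * (float)Math.Sin(t * Math.PI));
                }
            }
        }

        class Pit : ITerrainStructure
        {
            public int PointCount => 6;
            public IEnumerable<Point> GeneratePoints(Point start, Random rng)
            {
                float depth = 10 + (float)rng.NextDouble() * 30;    // random pit depth
                for (int i = 0; i < PointCount; i++)
                {
                    float t = i / (float)(PointCount - 1);
                    yield return new Point(start.X + i * 8, start.Y - depth * (float)Math.Sin(t * Math.PI));
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                var rng = new Random();
                var queue = new List<ITerrainStructure> { new Hill(), new Pit(), new Hill() };
                var cursor = new Point(0, 300);
                foreach (var structure in queue)
                    foreach (var p in structure.GeneratePoints(cursor, rng))
                    {
                        Console.WriteLine($"{p.X}, {p.Y}");
                        cursor = p;   // the next structure starts where this one ended
                    }
            }
        }

    A generator can then keep a short queue of such structures ahead of the player and, for each one, append its points and any accompanying physics bodies to the terrain.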

    Read the article

  • Unable to set my own icon for launcher item in 12.04

    - by Alex K
    I use the Faenza icon collection in Ubuntu 12.04 Unity with no issues. I decided to change my Gimp launcher icon, so I made my own (gimp-ak.png) and added it, and its appropriately sized derivatives, to the Faenza icon folders:

        /usr/share/icons/Faenza/apps/16/gimp-ak.png
        /usr/share/icons/Faenza/apps/22/gimp-ak.png
        /usr/share/icons/Faenza/apps/24/gimp-ak.png
        /usr/share/icons/Faenza/apps/32/gimp-ak.png
        /usr/share/icons/Faenza/apps/48/gimp-ak.png
        /usr/share/icons/Faenza/apps/64/gimp-ak.png
        /usr/share/icons/Faenza/apps/96/gimp-ak.png
        /usr/share/icons/Faenza/apps/scalable/gimp-ak.svg

    I then updated the Icon field in /usr/share/applications/gimp.desktop from "gimp" to "gimp-ak":

        [Desktop Entry]
        Version=1.0
        Type=Application
        Name=GIMP Image Editor
        GenericName=Image Editor
        Comment=Create images and edit photographs
        Exec=gimp-2.6 %U
        TryExec=gimp-2.6
        Icon=gimp-ak
        Terminal=false
        Categories=Graphics;2DGraphics;RasterGraphics;GTK;
        X-GNOME-Bugzilla-Bugzilla=GNOME
        X-GNOME-Bugzilla-Product=GIMP
        X-GNOME-Bugzilla-Component=General
        X-GNOME-Bugzilla-Version=2.6.12
        X-GNOME-Bugzilla-OtherBinaries=gimp-2.6
        StartupNotify=true
        MimeType=application/postscript;application/pdf;image/bmp;image/g3fax;image/gif;image/x-$
        X-Ubuntu-Gettext-Domain=gimp20

    After logging off (and even restarting), my custom icon does not show up; Gimp has the default gear icon. Setting the Icon field in gimp.desktop to any other icon in the Faenza collection works fine. What do I need to do to get my custom icon to show up properly?

    Read the article

  • What do you call an obfuscator that isn't an obfuscator?

    - by Alex.Davies
    SmartAssembly, formerly {smartassembly}, version 5 is now available as an Early Access Build. You can get it here: http://www.red-gate.com/MessageBoard/viewforum.php?f=116 We're having second thoughts about the name change though. It isn't that we like the curly brackets, far from it. The trouble is that the first rule of product naming is to name a product by what it does. SmartAssembly may make an assembly smarter, but that's not something people really google for. The trouble is, I can't think of a better name for it. That's because SmartAssembly really does two completely separate things: Obfuscates Sets up your assembly for the awesome exception reports which get sent to you whenever your application crashes. You may have been (un?)lucky enough to see one in reflector if you use it. This is what those exception reports look like when they arrive back with the developer: Look at all those local variables! If you ask me, this is much cooler than the obfuscation. So obviously we don't want to call it just "Red Gate Obfuscator" or something, because it doesn't do justice to the exception reporting. What would you call it?
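    For readers wondering what the "report on crash" half involves mechanically, here is a generic sketch of the kind of hook such a reporter installs; it is not SmartAssembly's implementation, and all names are hypothetical:

        // Generic crash-report hook, for illustration only (not SmartAssembly's code):
        // catch any unhandled exception and persist its details for later reporting.
        using System;
        using System.IO;

        static class CrashReporter
        {
            public static void Install()
            {
                AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
                {
                    var ex = args.ExceptionObject as Exception;
                    // A real reporter would capture far more context (and send it home);
                    // here we just append the stack trace to a local file.
                    File.AppendAllText("crash-report.txt",
                        DateTime.UtcNow + Environment.NewLine + ex + Environment.NewLine);
                };
            }

            static void Main()
            {
                Install();
                throw new InvalidOperationException("demo crash");
            }
        }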

    Read the article

  • Steam freezes at login screen

    - by Snail284069
    I have just installed Steam on Xubuntu, and after it finished installing it went to the login screen, but the screen is frozen and I cannot press the buttons. When running Steam through the terminal it says:

        alex@Craptop:~$ steam
        Running Steam on ubuntu 14.04 32-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Gtk-Message: Failed to load module "overlay-scrollbar"
        Gtk-Message: Failed to load module "unity-gtk-module"
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        [0522/174755:WARNING:proxy_service.cc(958)] PAC support disabled because there is no system implementation
        Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
        Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
        Fontconfig warning: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 78: saw unknown, expected number
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        [HTTP Remote Control] HTTP server listening on port 35849.
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Process 2764 created /alex-ValveIPCSharedObjects5
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Generating new string page texture 12: 48x256, total string texture memory is 49.15 KB
        Generating new string page texture 13: 256x256, total string texture memory is 311.30 KB
        Generating new string page texture 14: 128x256, total string texture memory is 442.37 KB
        Generating new string page texture 15: 384x256, total string texture memory is 835.58 KB

    and then the terminal gets stuck too, letting me type into it but not doing anything. I tried reinstalling and restarting the computer, but it still keeps happening.

    Read the article

  • Why not to use StackTrace to find what method called you

    - by Alex.Davies
    Our obfuscator, SmartAssembly, does some pretty crazy reflection. It's an obfuscator; it's sort of its job to do things in the most awkward way possible. But sometimes, you can go too far. One such time is this little gem from the strings encoding feature:

        StackTrace stackTrace = new StackTrace();
        StackFrame frame = stackTrace.GetFrame(1);
        Type ownerType = frame.GetMethod().DeclaringType;

    It's designed to find the type where the calling method is defined. A user found that strings encoding occasionally broke on x64 systems. Very strange. After some debugging (thank god for Reflector Pro, it would be impossible to debug processed assemblies without it) I found that the ownerType I got back was wrong. The reason is that the x64 JIT does tail call optimisation. This saves space on the stack, and speeds things up, by throwing away a method's stack frame when the last thing the method does is call another method and return its result directly. When this happens, the call to StackTrace faithfully tells you that the calling method is the one that called the one we really wanted. So using StackTrace isn't safe for anything other than debugging, and it will make your code fail in unpredictable ways. Don't use it!
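    To illustrate the advice, here is a small sketch, not SmartAssembly's actual code, contrasting the fragile StackTrace lookup with simply making the caller pass its own type:

        // Sketch only: the fragile runtime lookup vs. an explicit argument.
        using System;
        using System.Diagnostics;

        static class OwnerFinder
        {
            // Fragile: relies on the caller still having a stack frame. Under x64
            // tail-call optimisation (or inlining) GetFrame(1) can point at the
            // caller's caller instead.
            public static Type FromStackTrace()
            {
                StackFrame frame = new StackTrace().GetFrame(1);
                return frame.GetMethod().DeclaringType;
            }

            // Safer: the caller states its own type, so there is nothing to guess.
            public static Type FromExplicitArgument(Type ownerType)
            {
                return ownerType;
            }
        }

        class Demo
        {
            static void Main()
            {
                Console.WriteLine(OwnerFinder.FromStackTrace());                    // may be wrong under tail calls
                Console.WriteLine(OwnerFinder.FromExplicitArgument(typeof(Demo)));  // always Demo
            }
        }

    The explicit version has nothing to discover at runtime, so the JIT can inline or tail-call whatever it likes without changing the answer.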

    Read the article

  • Refactoring an immediate drawing function into VBO, access violation error

    - by Alex
    I have an MD2 model loader, and I am trying to substitute its immediate-mode drawing function with a Vertex Buffer Object one. I am getting a really annoying access violation reading error and I can't figure out why, but mostly I'd like an opinion as to whether this looks correct (I've never used VBOs before). This is the original function (that compiles OK) which calculates the keyframe and draws at the same time:

        glBegin(GL_TRIANGLES);
        for(int i = 0; i < numTriangles; i++) {
            MD2Triangle* triangle = triangles + i;
            for(int j = 0; j < 3; j++) {
                MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) {
                    normal = Vec3f(0, 0, 1);
                }
                glNormal3f(normal[0], normal[1], normal[2]);
                MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
                glVertex3f(pos[0], pos[1], pos[2]);
            }
        }
        glEnd();

    What I'd like to do is to calculate all positions beforehand, store them in a vertex array and then draw them. This is what I am trying to replace it with (in the exact same part of the program):

        int vCount = 0;
        for(int i = 0; i < numTriangles; i++) {
            MD2Triangle* triangle = triangles + i;
            for(int j = 0; j < 3; j++) {
                MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) {
                    normal = Vec3f(0, 0, 1);
                }
                indices[vCount] = normal[0]; vCount++;
                indices[vCount] = normal[1]; vCount++;
                indices[vCount] = normal[2]; vCount++;
                MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                indices[vCount] = texCoord->texCoordX; vCount++;
                indices[vCount] = texCoord->texCoordY; vCount++;
                indices[vCount] = pos[0]; vCount++;
                indices[vCount] = pos[1]; vCount++;
                indices[vCount] = pos[2]; vCount++;
            }
        }
        totalVertices = vCount;
        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_VERTEX_ARRAY);
        glNormalPointer(GL_FLOAT, 0, indices);
        glTexCoordPointer(2, GL_FLOAT, sizeof(float)*3, indices);
        glVertexPointer(3, GL_FLOAT, sizeof(float)*5, indices);
        glDrawElements(GL_TRIANGLES, totalVertices, GL_UNSIGNED_BYTE, indices);
        glDisableClientState(GL_VERTEX_ARRAY); // disable vertex arrays
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_NORMAL_ARRAY);

    First of all, does it look right? Second, I get the access violation error "Unhandled exception at 0x01455626 in Graphics_template_1.exe: 0xC0000005: Access violation reading location 0xed5243c0" pointing at line 7, Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;, where the two Vs seem to have no value in the debugger.... Up to this point the function behaves in exactly the same way as the one above, so I don't understand why this happens. Thanks for any help you may be able to provide!

    Read the article

  • problem with Webmaster Google Sitemap

    - by Alex
    I have a WP MU 3.6.1 with domain mapping (0.5.4.3), W3TC (0.9.3), and Google XML Sitemaps (4.0 BETA). I have 4 different sitemaps:

        sub-1.com/sitemap.xml
        sub-2.com/sitemap.xml
        sub-3.com/sitemap.xml
        sub-4.com/sitemap.xml

    On Google Webmaster Tools I got 59 errors & 14 warnings.

    Sitemap errors (Errors): We encountered an error while trying to access your Sitemap. Please ensure your Sitemap follows our guidelines and can be accessed at the location you provided and then resubmit. General HTTP error: 404 not found. Sitemap: sub-2.com/sitemap-pt-post-2011-02.xml, etc. (but when I click on my sitemap links they work fine)

    Sitemap errors (Warnings): URLs not accessible. When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted. Sitemap: sub-2.com/sitemap-misc.xml, HTTP Error: 404, URL: /sitemap.html (but when I click on my sitemap links they work fine)

    Sitemap errors (Index errors): URLs not accessible. When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted. HTTP Error: 404, URL: /sitemap-pt-post-2010-09.xml (but when I click on my sitemap links they work fine)

    Web pages: 3,276 submitted, 3,247 indexed.

    What do I have to put in Network Admin > Performance (W3TC) > Page Cache > Cache Preload > Sitemap URL? I have added "/sitemap.xml".
    My robots.txt: http://pastebin.com/3K2U0mQa
    My .htaccess: http://pastebin.com/efJJ6zwy
    How can I make it work right?

    Read the article

  • Why lock statements don't scale

    - by Alex.Davies
    We are going to have to stop using lock statements one day. Just like we had to stop using goto statements. The problem is similar: they're pretty easy to follow in small programs, but code with locks isn't composable. That means that small pieces of program that work in isolation can't necessarily be put together and work together. Of course actors scale fine :)
    Why lock statements don't scale as software gets bigger
    Deadlocks. You have a program with lots of threads picking up lots of locks. You already know that if two of your threads both try to pick up a lock that the other already has, they will deadlock (a minimal sketch of exactly this situation appears at the end of this post). Your program will come to a grinding halt, and there will be fire and brimstone. "Easy!" you say, "Just make sure all the threads pick up the locks in the same order." Yes, that works. But you've broken composability. Now, to add a new lock to your code, you have to consider all the other locks already in your code and check that they are taken in the right order. Algorithm buffs will have noticed this approach means it takes quadratic time to write a program. That's bad.
    Why lock statements don't scale as hardware gets bigger
    Memory bus contention. There's another headache, one that most programmers don't usually need to think about, but which is going to bite us in a big way in a few years. Locking needs exclusive use of the entire system's memory bus while taking out the lock. That's not too bad for a single or dual-core system, but it is already a pretty large overhead for quad-core systems. Have a look at this blog about the .NET 4 ThreadPool for some numbers and a weird analogy (see the author's comment). Not too bad yet, but I'm scared my 1000-core machine of the future is going to go slower than my machine today! I don't know the answer to this problem yet. Maybe some kind of per-core work queue system with hierarchical work stealing. Definitely hardware support. But what I do know is that using locks specifically prevents any solution to this. We should be abstracting our code away from the details of locks as soon as possible, so we can swap in whatever solution arrives when it does. NAct uses locks at the moment. But my advice is that you code using actors (which do scale well as software gets bigger). And when there's a better way of implementing actors that'll scale well as hardware gets bigger, only NAct needs to work out how to use it, and your program will go fast on its own.
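    To make the deadlock point concrete, here is a minimal sketch, not from the original post (names and timings are invented), of two threads taking the same pair of locks in opposite order:

        // Classic lock-ordering deadlock: each thread holds one lock and waits
        // forever for the lock the other thread holds.
        using System;
        using System.Threading;

        class DeadlockDemo
        {
            static readonly object LockA = new object();
            static readonly object LockB = new object();

            static void Main()
            {
                var t1 = new Thread(() =>
                {
                    lock (LockA)
                    {
                        Thread.Sleep(100);          // give the other thread time to grab LockB
                        lock (LockB) { Console.WriteLine("t1 got both"); }
                    }
                });
                var t2 = new Thread(() =>
                {
                    lock (LockB)
                    {
                        Thread.Sleep(100);          // give the other thread time to grab LockA
                        lock (LockA) { Console.WriteLine("t2 got both"); }
                    }
                });
                t1.Start(); t2.Start();
                t1.Join(); t2.Join();               // never returns: the threads are deadlocked
            }
        }

    The conventional fix of always taking LockA before LockB works here, but, as the post argues, it means every new lock has to be ordered against every lock that already exists.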

    Read the article

  • XNA Deferred Shading, Replace BasicEffect

    - by Alex
    I have implemented deferred shading in my XNA 4.0 project, meaning that I need all objects to start out with the same shader, "RenderGBuffer.fx". How can I use a custom Content Processor to:
      - Not load any textures by default (I want to manually do this)
      - Use "RenderGBuffer.fx" as the default shader instead of BasicEffect
    Below is the progress so far:

        public class DeferredModelProcessor : ModelProcessor
        {
            EffectMaterialContent deferredShader;

            public DeferredModelProcessor() { }

            protected override MaterialContent ConvertMaterial(MaterialContent material, ContentProcessorContext context)
            {
                deferredShader = new EffectMaterialContent();
                deferredShader.Effect = new ExternalReference<EffectContent>("DeferredShading/RenderGBuffer.fx");
                return context.Convert<MaterialContent, MaterialContent>(deferredShader, typeof(MaterialProcessor).Name);
            }
        }

    Read the article

  • How are events in games handled?

    - by Alex
    In many games that I have played, I have seen events being triggered, such as when you walk into a certain land area while holding a specific object, it will trigger a special creature to spawn. I was wondering how games deal with events such as this. Not in a specific game, but in general among games. The first thought I had was that each place has a hard-coded set of events that it will call when something happens there. However, that would be too inefficient to maintain, as when something new is added, it would require modification of every part of the game that could potentially cause the event to be called. Next, I thought about how GUI programming works. In all of the GUI programming I've done, you create a component and a callback function, or a listener. Then, when the user interacts with the button, the callback function is called, allowing you to do something with it. So, I was thinking that in terms of a game, when a land area gets loaded the game loops over a list of all events, creating instances of them and calling public methods to bind them to the current scene. The events themselves then handle what scene it is, and if it is a scene that pertains to the event, they call the public method of the scene to bind the event to an action. Then, when the action takes place, the scene would call all events that are bound to that action. However, I'm sure that's not how games would operate either, as that would require creating a lot of events all the time. So how do video games handle events? Are either of those methods correct, or is it something completely different?
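    To make the listener idea above concrete, here is a small sketch in C#; the names are invented and real engines differ in detail, but the shape, a trigger raising an event and a spawner subscribing to it, is the same:

        // Illustrative listener/callback shape: the trigger knows nothing about
        // spawners; spawners decide for themselves whether the event applies.
        using System;

        class Player { public string HeldItem = "amulet"; }

        class TriggerArea
        {
            public event Action<Player> PlayerEntered;
            public void OnPlayerEnter(Player p) => PlayerEntered?.Invoke(p);
        }

        class CreatureSpawner
        {
            public void Subscribe(TriggerArea area)
            {
                area.PlayerEntered += p =>
                {
                    if (p.HeldItem == "amulet")
                        Console.WriteLine("Special creature spawns!");
                };
            }
        }

        class Demo
        {
            static void Main()
            {
                var area = new TriggerArea();
                new CreatureSpawner().Subscribe(area);
                area.OnPlayerEnter(new Player());   // prints the spawn message
            }
        }

    With this shape, adding a new gameplay element means adding a new subscriber when the area loads, rather than editing every place in the game that can raise the event.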

    Read the article

  • In MATLAB, how can 'preallocating' cell arrays improve performance?

    - by Alex McMurray
    I was reading this article on MathWorks about improving MATLAB performance, and you will notice that one of the first suggestions is to preallocate arrays, which makes sense. But it also says that preallocating cell arrays (that is, arrays which may contain different, unknown datatypes) will improve performance. But how will doing so improve performance? The datatypes are unknown, so MATLAB doesn't know how much contiguous memory the contents will require even if it knows the shape of the cell array, and therefore surely it can't preallocate the memory? So how does this result in any improvement in performance? I apologise if this question is better suited for StackOverflow than Programmers, but it isn't asking about a specific problem, so I thought it fit better here; please let me know if I am mistaken though. Any explanation would be greatly appreciated :)
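    The usual explanation, stated here as background rather than as part of the original question, is that a cell array is itself an array of fixed-size cell headers that reference the contents, so preallocating reserves that header array even though the contents' sizes are unknown; growing it one element at a time is what forces repeated reallocation and copying. A rough analogy in C# (an array of object references, not MATLAB internals):

        // Analogy only: the container holds fixed-size references, so it can be
        // sized up front even though the referenced contents vary in type.
        using System;

        class CellPreallocationAnalogy
        {
            static void Main()
            {
                int n = 100000;

                // Growing one element at a time: each step allocates a new, larger
                // reference array and copies the old one across (quadratic work).
                object[] grown = new object[0];
                for (int i = 0; i < n; i++)
                {
                    Array.Resize(ref grown, i + 1);
                    grown[i] = (i % 2 == 0) ? (object)i : "text";   // heterogeneous contents
                }

                // Preallocating: the reference slots are reserved up front; only
                // the contents are created inside the loop.
                object[] prealloc = new object[n];
                for (int i = 0; i < n; i++)
                    prealloc[i] = (i % 2 == 0) ? (object)i : "text";

                Console.WriteLine(grown.Length + " " + prealloc.Length);
            }
        }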

    Read the article

  • Constraint-based expert systems design and development [on hold]

    - by Alex B.
    I would appreciate some recommendations & resources on design and development of expert systems, in particular, knowledge-based & constraint-based (not recommendation) systems. Ideally, your answers should consider the perspective (context) of using a SaaS business model and open source rules engine. How would you advise to address performance, scalability and other architectural criteria? Any other considerations on undertaking such project will be appreciated. Thanks much in advance!

    Read the article

  • Two Wifi Icons in Panel [Solved]

    - by Alex
    I have the exact same problem in 13.10 as this user: Two Wifi indicators in panel. Here are some screenshots, and some screenshots from another user: http://ubuntuforums.org/showthread.php?t=2183020&p=12825563

    ifconfig and iwconfig outputs:

        $ ifconfig
        lo        Link encap:Local Loopback
                  inet addr:XXXXXX  Mask:XXXXXXX
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:2243 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2243 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:209889 (209.8 KB)  TX bytes:209889 (209.8 KB)

        wlan0     Link encap:Ethernet  HWaddr XXXXXXXXX
                  inet addr:XXXXXX  Bcast:XXXXXXXX  Mask:XXXXXXX
                  inet6 addr: XXXXXXX Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5925 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:3361 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2951818 (2.9 MB)  TX bytes:630579 (630.5 KB)

        $ iwconfig
        lo        no wireless extensions.

        wlan0     IEEE 802.11abgn  ESSID:"XXXXX"
                  Mode:Managed  Frequency:2.437 GHz  Access Point: XXXXXXXX
                  Bit Rate=72.2 Mb/s  Tx-Power=15 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:on
                  Link Quality=49/70  Signal level=-61 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:153  Invalid misc:472  Missed beacon:0

    Read the article

  • C# async and actors

    - by Alex.Davies
    If you read my last post about async, you might be wondering what drove me to write such odd code in the first place. The short answer is that .NET Demon is written using NAct Actors.
    Actors are an old idea, which I believe deserve a renaissance under C# 5. The idea is to isolate each stateful object so that only one thread has access to its state at any point in time. That much should be familiar; it's equivalent to traditional lock-based synchronization. The different part is that actors pass "messages" to each other rather than calling a method and waiting for it to return. By doing that, each thread can only ever be holding one lock. This completely eliminates deadlocks, my least favourite concurrency problem. Most people who use actors take this quite literally, and there are plenty of frameworks which help you to create message classes and loops which can receive the messages, inspect what type of message they are, and process them accordingly. But I write C# for a reason. Do I really have to choose between using actors and everything I love about object orientation in C#?
      - Type safety
      - Interfaces
      - Inheritance
      - Generics
    As it turns out, no. You don't need to choose between messages and method calls. A method call makes a perfectly good message, as long as you don't wait for it to return. This is where asynchronous methods come in. I have used NAct for a while to wrap my objects in a proxy layer. As long as I followed the rule that methods must always return void, NAct queued up the call for later, and immediately released my thread. When I needed to get information out of other actors, I could use EventHandlers and callbacks (continuation passing style, for any CS geeks reading), and NAct would call me back in my isolated thread without blocking the actor that raised the event. Using callbacks looks horrible though. To remind you:

        m_BuildControl.FilterEnabledForBuilding(
            projects,
            enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
                enabledProjects,
                newDirtyProjects =>
                {
                    .......

    Which is why I'm really happy that NAct now supports async methods. Now, methods are allowed to return Task rather than just void. I can await those methods, and C# 5 will turn the rest of my method into a continuation for me. NAct will run the other method in the other actor's context, but will make sure that when my method resumes, we're back in my context. Neither actor was ever blocked waiting for the other one. Apart from when they were actually busy doing something, they were responsive to concurrent messages from other sources. To be fair, you could use async methods with lock statements to achieve exactly the same thing, but it's ugly. Here's a realistic example of an object that has a queue of data that gets passed to another object to be processed:

        class QueueProcessor
        {
            private readonly ItemProcessor m_ItemProcessor = ...
            private readonly object m_Sync = new object();
            private Queue<object> m_DataQueue = ...
            private List<object> m_Results = ...

            public async Task ProcessOne()
            {
                object data = null;
                lock (m_Sync)
                {
                    data = m_DataQueue.Dequeue();
                }
                var processedData = await m_ItemProcessor.ProcessData(data);
                lock (m_Sync)
                {
                    m_Results.Add(processedData);
                }
            }
        }

    We needed to write two lock blocks, one to get the data to process, one to store the result. The worrying part is how easily we could have forgotten one of the locks.
    Compare that to the version using NAct:

        class QueueProcessorActor : IActor
        {
            private readonly ItemProcessor m_ItemProcessor = ...
            private Queue<object> m_DataQueue = ...
            private List<object> m_Results = ...

            public async Task ProcessOne()
            {
                // We are an actor, it's always thread-safe to access our private fields
                var data = m_DataQueue.Dequeue();
                var processedData = await m_ItemProcessor.ProcessData(data);
                m_Results.Add(processedData);
            }
        }

    You don't have to explicitly lock anywhere; NAct ensures that your code will only ever run on one thread, because it's an actor. Either way, async is definitely better than traditional synchronous code. Here's a diagram of what a typical synchronous implementation might do: the left side shows what is running on the thread that has the lock required to access the QueueProcessor's data, and the red section is where that lock is held, but doesn't need to be. Contrast that with the async version we wrote above: here, the lock is released in the middle. The QueueProcessor is free to do something else. Most importantly, even if the ItemProcessor sometimes calls the QueueProcessor, they can never deadlock waiting for each other. So I thoroughly recommend you use async for all code that has to wait a while for things. And if you find yourself writing lots of lock statements, think about using actors as well. Using actors and async together really takes the misery out of concurrent programming.

    Read the article
