Search Results

Search found 7375 results on 295 pages for 'parameter'.

Page 215 of 295

  • Developing for 2005 using VS2008!

    - by Vincent Grondin
    I joined a fairly large project recently and it has a particularity: once finished, everything has to be sent to the client under VS2005 using VB.Net and can target either framework 2.0 or 3.0. A long time ago the decision was taken to use VS2008 and to target framework 3.0, but people knew they would need to establish a few rules to ensure that each dev would use VS2008 as if it were VS2005. Why is that so? Simply because the compiler in VS2005 is different from the compiler inside VS2008. I thought it might be a good idea to note the things that you cannot use in VS2008 if you plan on going back to VS2005. Who knows, this might save someone the headache of going over all their code to fix errors.
    - Do not use LinQ keywords (from, in, select, orderby…).
    - Do not use LinQ standard operators under the form of extension methods.
    - Do not use type inference (in VB.Net you can switch it OFF in each project's properties). This also means you cannot use XML Literals.
    - Do not use nullable types under the declarative form "Dim myInt as Integer?". Using "Dim myInt as Nullable(Of Integer)" is perfectly fine.
    - Do not test nullable types with "Is Nothing"; use "myInt.HasValue" instead.
    - Do not use Lambda expressions (there are no Lambda statements in VB9), so you cannot use the keyword "Function".
    - Pay attention not to use relaxed delegates, because this one is easy to miss in VS2008.
    - Do not use Object Initializers.
    - Do not use the "ternary If operator": not the IIf method, but this one: If(condition, truepart, falsepart).
    As a side note, I talked about not using the LinQ keywords nor the extension methods, but this doesn't mean you can't use LinQ in this scenario. LinQ is perfectly accessible from inside VS2005. All you need to do is reference System.Core, use namespace System.Linq and use the class "Enumerable" as a helper class… This is one of the many classes containing the various methods that VS2008 sees as extensions. The trick is you can use them too! Simply remember that the first parameter of the method is the object you want to query on, and then pass in the other parameters needed… That's pretty much all I see, but I could have missed a few… If you know other things that are specific to the VS2008 compiler and which do not work under VS2005, feel free to leave a comment and I'll modify my list accordingly (and notify our team here…)! Happy coding all!

    Read the article

  • Difference between the terms Material & Effect

    - by codey
    I'm making an effect system right now (I think, because it may be a material system... or both!). The effects system follows the common (e.g. COLLADA, DirectX) effect framework abstraction of Effects have Techniques, Techniques have Passes, Passes have States & Shader Programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could relate to quality (e.g. high precision, high LOD, etc.), or to the in-game situation (e.g. night/day, power-up-mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters, & passes necessary for rendering this effect using one method. Some algorithms require several passes to render the effect. Pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders, & settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, & selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn, Phong "material types/shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system". Is the material not an instance of an effect? Ergo, if I had effect objects stored in a collection, each effect instance with its own parameter settings, then there is no need for the concept of a material... Or am I interpreting it wrong? Please help by contributing your interpretations, as I want to be clear on the distinction (if any), & don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework & COLLADA definitions closely.
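    To make the hierarchy above concrete, here is a rough class-structure sketch of how I read it (purely illustrative: the type and member names are mine, not anything from COLLADA or the DirectX FX framework):
    // Illustrative sketch only (requires System.Collections.Generic); the names are
    // shorthand for the abstraction described above, not an actual COLLADA/D3DX API.
    public class Pass
    {
        // static declaration of everything needed for "one rendering pipeline"
        public Dictionary<string, object> RenderStates = new Dictionary<string, object>();
        public string VertexShader;   // shader program source or handle
        public string PixelShader;
    }

    public class Technique
    {
        public string Name;                                   // e.g. "HighQuality", "NightTime"
        public List<Pass> Passes = new List<Pass>();          // some algorithms need several passes
        public Dictionary<string, object> Parameters =        // textures, samplers, tweakable values
            new Dictionary<string, object>();
    }

    public class Effect
    {
        public string Name;
        public List<Technique> Techniques = new List<Technique>();  // the different ways to render it
    }

    // Reading the COLLADA definition literally: a material points at an effect,
    // chooses one technique and supplies concrete parameter values.
    public class Material
    {
        public Effect Effect;
        public string SelectedTechnique;
        public Dictionary<string, object> ParameterValues = new Dictionary<string, object>();
    }
    Under that reading, a material really is just "an effect instance plus parameter values plus a technique choice", which is exactly the distinction I'm trying to pin down.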

    Read the article

  • BIP 10.1.3.4.x June 2010 Update Available

    - by Tim Dexter
    A new patchset for 10.1.3.4.0 and 10.1.3.4.1 is available on Metalink. Some notes: The patch number is 9791839. This patchset includes 28 new bug fixes since the last patchset release on March 31. This is a cumulative update that includes all the fixes and enhancements from previous updates; the patch will supersede the other two updates. Install instructions are in the readme inside the patch. There is also a new BIP client patch available, 9821068. No new template building features to my knowledge, but there is an update to the template viewer to allow you to test and debug your shiny new Excel templates. Server fixes:
    - 8529759 XMLP_TEMPLATE_DESIGNER CANNOT SAVE / UPLOAD TEMPLATE
    - 8566455 BI PUBLISHER SCHEDULER DOES NOT START WITH JNDI DATA SOURCE
    - 9295667 RESPONSE OF GETSCHEDULEDREPORTINFO RETURNS STATUS AS 'UNKNOWN' INSTEAD OF 'SCHED
    - 9542413 UNABLE TO CREATE A NEW TEMPLATE FROM UI
    - 9546137 EXCEL ANALYZER TEMPLATE FAILS FOR A STRUCTURED XML WHEN IT IS UPLOADED
    - 9556338 SIEBEL - BIP PARAMETERS SORT ORDER
    - 9560562 BI PUBLISHER CACHE DIRECTORY FILLING UP AND POINTING TO INVALID DIRECTORY
    - 9646599 USER ROLE DEFINED AS PRIMARYGROUP IN ACTIVEDIRECTORY GROUP ARE NOT RECOGNIZED
    - 9664768 ER: NEED TO BIND USER ATTRIBUTE VALUES DEFINED IN ACTIVEDIRECTORY IN DATA QUERY
    - 9665075 BI PUBLISHER AFTER 9546699 NOTIFICATIONS FOR REPORTS FAIL
    - 9669973 ER: NEED TO SUPPORT PRE-PROCESSING XML WITH XSL FOR EXCEL TEMPLATE
    - 9704401 ER: NEED TO SUPPORT DEFAULT GROUP FOR ALL USERS IN LDAP/AD SECURITY
    - 9711899 SEARCH PARAMETER IS NOT VISIBLE WHEN SCHEDULE A REPORT
    - 9753736 SOME ROLES FROM ACTIVEDIRECTORY ARE NOT LISTED IN ADMIN ROLE-FOLDER MAPPING
    - 9771354 MULTIPLE PARAMETERS IN 10.1.3.4.1 DATA TEMPLATE ACT DIFFERENTLY FROM 10.1.3.
    - 9772982 "REFRESH OTHER PARAMETERS ON CHANGE" DOESN'T WORK PROPERLY
    Core fixes:
    - 8599646 ER: EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX
    - 9377593 SOME ROWS HEIGHT IN HTML/EXCEL OUTPUT ARE TOO BIG IN BI PUBLISHER
    - 9487030 NAVIGATION TREE REPEATING TWICE IN PDF DOCUMENT CREATED BY BI PUBLISHER
    - 9509432 PERFORMANCE ISSUE WHEN USING PDF TEMPLATE
    - 9534424 PS: DOCUMENT-REPEAT-FULLPATH-ELEMENTNAME SHOULDNT USE DOT "." AS PATH SEPARATOR
    - 9553360 FORMPROCESSOR CANNOT PARSE SOME PDF TEMPLATES
    - 9554959 TEXT IN AUTOSHAPE IS NOT PROPERLY CUT OFF FOR LINE WRAPPING
    - 9569417 AFTER APPLYING PATCH 9509432 PDF TEMPLATES WITH DBDRV PRODUCE NO OUTPUT
    - 9571670 ER: EXCEL TEMPLATE TO SUPPORT XSLT LOGIC AND XSL CUSTOM EXTENTIONS
    - 9589809 XSL:CALL-TEMPLATE IS MISSING IN GENERATED XSL FILE
    - 9605920 BOOKMARK TESTCASE FAILED DUE TO ER9283933
    - 9689634 PRINT FLOW CHART USING ACROSS 3 DOWN 0 GIVES EXTRA BLANK PAGES
    You might have noticed some fixes and enhancements to the Excel templates, so I can get back on those now. There is a part two to the Mapviewer BIP Mashup coming ... just need another 4 hours in the day to squeeze it in.

    Read the article

  • How to set the monitor to its native resolution when xrandr approach isn't working?

    - by Krishna Kant Sharma
    I am trying to setup my Samsung syncmaster B2030 monitor in ubuntu 12.04. It's native resolution is 1600x900 which I am not getting in ubuntu and which I am trying to get. I tried using xrandr approach provided in these urls: 1) http://www.ubuntugeek.com/how-change-display-resolution-settings-using-xrandr.html 2) How to set the monitor to its native resolution which is not listed in the resolutions list? S1) I used cvt 1600 900 60 to get the modeline. Output was: # 1600x900 59.95 Hz (CVT 1.44M9) hsync: 55.99 kHz; pclk: 118.25 MHz Modeline "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync S2) I then used xrandr and output was: Screen 0: minimum 8 x 8, current 1152 x 864, maximum 8192 x 8192 DVI-I-0 disconnected (normal left inverted right x axis y axis) VGA-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm 1024x768 60.0 + 1360x768 60.0 59.8 1152x864 60.0* 800x600 72.2 60.3 56.2 680x384 119.9 119.6 640x480 59.9 512x384 120.0 400x300 144.4 320x240 120.1 DVI-I-1 disconnected (normal left inverted right x axis y axis) HDMI-0 disconnected (normal left inverted right x axis y axis) which gave me "VGA-0". S3) Then I used xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync But instead of adding the modeline it just threw an error: X Error of failed request: BadName (named color or font does not exist) Major opcode of failed request: 153 (RANDR) Minor opcode of failed request: 16 (RRCreateMode) Serial number of failed request: 29 Current serial number in output stream: 29 My system details: 1) ubuntu 12.04 LTS 2) Graphic card: GeForce 9400 GT/PCIe/SSE2 (driver is successfully installed. I am checking it in System Settings Details. And it's showing that driver is installed and its "GeForce 9400 GT/PCIe/SSE2") 3) Monitor: Samsung syncmaster B2030 4) Resolutions I am getting: 800x600 1024x768 1152x864 (I am currently using this one) 1360x768 (this one isn't working properly) Does anyone know what I can do? Thanks in advance. UPDATE (1): Today I tried it again. And adding a modeline (using --newmode) worked. But when I used --addmode by: xrandr --addmode VGA-0 1600x900_60.00 It gave this error: X Error of failed request: BadMatch (invalid parameter attributes) Major opcode of failed request: 153 (RANDR) Minor opcode of failed request: 18 (RRAddOutputMode) Serial number of failed request: 29 Current serial number in output stream: 30

    Read the article

  • Default values - are they good or evil?

    - by Andrew
    The question is about default values in general: default return values of functions, default parameter values, default logic for when something is missing, default logic for handling exceptions, default logic for handling edge conditions, etc. For a long time I considered default values to be a "pure evil" thing, something that "cloaks the catastrophe" and results in very hard to find bugs. But recently I started to think about default values as some sort of a technical debt ... which is not a straight bad thing but something that could provide some "short term financing" and get us to survive the project (how many of us could afford to buy a house without taking out a mortgage?). When I say "short term" - I don't mean "do something quickly first and refactor it out later before it hits production". No - I am talking about relying on hardcoded default values in production software. Granted - it could cause some issues, but what if it is only going to cause a single problem in a whole year? Again - I am talking about the "average" mainstream software here (not software for a nuclear power station) - the average web site or a UI application for accounting software, meaning that people's lives are not at stake, nor millions of dollars. Again, from my experience, business users would rather live with software which "works somehow" than wait for a perfect one. And the use of default values helps a lot if you develop software in a RAD style. But again - the longest debug sessions I have spent were because of bugs introduced by a default value which either stopped being "a default" along the way, or because a small subsystem had recently been upgraded and as a result of this upgrade it did not handle the default correctly (e.g. empty list vs null, or null string vs empty string). So my question is - are default values good or evil? And if they are a technical debt - how do you measure how much you can borrow so you can afford the repayments? Would really appreciate any input. Cheers. EDIT: If I am using default values as a way to cut corners during development - and if the corner cutting results in bugs and issues - what is the methodology to recover from these issues?

    Read the article

  • Obtaining positional information in the IEnumerable Select extension method

    - by Kyle Burns
    This blog entry is intended to provide a narrow and brief look into a way to use the Select extension method that I had until recently overlooked. Every developer who is using IEnumerable extension methods to work with data has been exposed to the Select extension method, because it is a pretty critical piece of almost every query over a collection of objects.  The method is defined on type IEnumerable and takes as its argument a function that accepts an item from the collection and returns an object which will be an item within the returned collection.  This allows you to perform transformations on the source collection.  A somewhat contrived example would be the following code that transforms a collection of strings into a collection of anonymous objects:
    var media = new[] { "book", "cd", "tape" };
    var transformed = media.Select(item => new { Media = item });
    This code transforms the array of strings into a collection of objects which each have a string property called Media. If every developer using the LINQ extension methods already knows this, why am I blogging about it?  I'm blogging about it because the method has another overload that I hadn't seen before I needed it a few weeks back, and I thought I would share a little about it with whoever happens upon my blog.  In the other overload, the function defined in the first overload as Func<TSource, TResult> is instead defined as Func<TSource, int, TResult>.  The additional parameter is an integer representing the current element's position in the enumerable sequence.  I used this information in what I thought was a pretty cool way to compare collections and I'll probably blog about that sometime in the near future, but for now we'll continue with the contrived example I've already started, to keep things simple and show how this works.  The following code sample shows how the positional information could be used in an alternating color scenario.  I'm using a foreach loop because IEnumerable doesn't have a ForEach extension, but many libraries do add the ForEach extension to IEnumerable, so you can update the code if you're using one of these libraries or have created your own.
    var media = new[] { "book", "cd", "tape" };
    foreach (var result in media.Select(
        (item, index) => new { Item = item, Index = index }))
    {
        Console.ForegroundColor = result.Index % 2 == 0
            ? ConsoleColor.Blue : ConsoleColor.Yellow;
        Console.WriteLine(result.Item);
    }

    Read the article

  • Latency Matters

    - by Frederic P
    A lot of interest in low latencies has been expressed within the financial services segment, most especially in the stock trading applications where every millisecond directly influences the profitability of the trader. These days, much of the trading is executed by software applications which are trained to respond to each other almost instantaneously. In fact, you could say that we are in an arms race where traders are using any and all options to cut down on the delay in executing transactions, even by moving physically closer to the trading venue. The Solaris OS network stack has traditionally been engineered for high throughput, at the expense of higher latencies. Knowledge of tuning parameters to redress the imbalance is critical for applications that are latency sensitive. We are presenting in this blog how to configure further a default Oracle Solaris 10 installation to reduce network latency. There are many parameters in fact that can be altered, but the most effective ones are intr_blank_time and intr_blank_packets. These parameters affect on-board network throughput and latency on Solaris systems. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive, resulting in higher network throughput and lower latency, but with higher CPU utilization. With interrupt blanking disabled, processor utilization can be as high as 80–90% in some high-load web server environments. If interrupt blanking is enabled, packets are processed when the interrupt is issued. Enabling interrupt blanking can result in reduced processor utilization and network throughput, but higher network latency. Both parameters should be set at the same time. You can set these parameters by using the ndd command as follows: # ndd -set /dev/eri intr_blank_time 0 # ndd -set /dev/eri intr_blank_packets 0 You can add them to the /etc/system file as follows: set eri:intr_blank_time 0 set eri:intr_blank_packets 0 The value of the interrupt blanking parameter is a trade-off between network throughput and processor utilization. If higher processor utilization is acceptable for achieving higher network throughput, then disable interrupt blanking. If lower processor utilization is preferred and higher network latency is the penalty, then enable interrupt blanking. Our experience at ISV Engineering is that under controlled experiments the above settings result in reduction of network latency by at least 50%; on a two-socket 3GHz Sun Fire X4170 M2 running Solaris 10 Update 9, the above settings improved ping-pong latency from 60µs to 25-30µs with the on-board NIC.

    Read the article

  • EJB Named Criteria - Apply bind variable in Backingbean

    - by Deepak Siddappa
    EJB Named Criteria are predefined and reusable where-clause definitions that are dynamically applied to a ViewObject query. They are often used to filter the ViewObject SQL statement based on where-clause conditions. Take a scenario where we need to filter the SQL query on where-clause conditions: instead of playing with SQL statements, use the EJB Named Criteria, which is supported by default in ADF, and set the bind variable parameter at run time. You can download the sample workspace from here [Runs with Oracle JDeveloper 11.1.2.0.0 (11g R2) + HR Schema]. Implementation steps: Create a Java EE Web Application with an entity based on the Employees table, then create a session bean and a data control for the session bean. Open the DataControls.dcx file and create a sparse xml as shown below. In the sparse xml navigate to the Named Criteria tab -> Bind Variable section and create the bind variable deptId. Now create a named criteria and map the query attributes to the bind variable. In the ViewController create an index.jspx page; from the data control palette drop employeesFindAll->Named Criteria->EmployeesCriteria->Table as an ADF Read-Only Filtered Table and create the backing bean as "IndexBean". Open the index.jspx page and remove the "filterModel" binding from the table, add <af:inputText />, a command button, and bind them to the backing bean. For the command button create the actionListener as "applyEmpCriteria" and add the code below to the file.
    public void applyEmpCriteria(ActionEvent actionEvent) {
        DCIteratorBinding dc = (DCIteratorBinding)evaluteEL("#{bindings.employeesFindAllIterator}");
        ViewObject vo = dc.getViewObject();
        vo.applyViewCriteria(vo.getViewCriteriaManager().getViewCriteria("EmployeesCriteria"));
        vo.ensureVariableManager().setVariableValue("deptId", this.getDeptId().getValue());
        vo.executeQuery();
    }
    /**
     * Programmatic evaluation of EL
     *
     * @param el EL to evaluate
     * @return Result of the evaluation
     */
    public Object evaluteEL(String el) {
        FacesContext fctx = FacesContext.getCurrentInstance();
        ELContext elContext = fctx.getELContext();
        Application app = fctx.getApplication();
        ExpressionFactory expFactory = app.getExpressionFactory();
        ValueExpression valExp = expFactory.createValueExpression(elContext, el, Object.class);
        return valExp.getValue(elContext);
    }
    Run the index.jspx page, enter a departmentId value of 90 and click the ApplyEmpCriteria button. The bind variable for the named criteria will now be applied at runtime in the backing bean, and it will re-execute the ViewObject query to filter on the where-clause condition.

    Read the article

  • Using a SQL Prompt snippet with template parameters

    - by SQLDev
    As part of my product management role I regularly attend trade shows and man the Red Gate booth in the vendor exhibition hall. Amongst other things this involves giving product demos to customers. Our latest demo involves SQL Source Control and SQL Test in a continuous integration environment. In order to demonstrate quite how easy it is to set up our tools from scratch we start the demo by creating an entirely new database to link to source control, using an individual database name for each conference attendee. In SQL Server Management Studio this can be done either by selecting New Database from the Object Explorer or by executing “CREATE DATABASE DemoDB_John” in a query window. We recently extended the demo to include SQL Test. This uses an open source SQL Server unit testing framework called tSQLt (www.tsqlt.org), which has a CLR object that requires EXTERNAL_ACCESS to be set as follows: ALTER DATABASE DemoDB_John SET TRUSTWORTHY ON This isn’t hard to do, but if you’re giving demo after demo, this two-step process soon becomes tedious. This is where SQL Prompt snippets come into their own. I can create a snippet named create_demo_db for this following: CREATE DATABASE DemoDB_John GO USE DemoDB_John GO ALTER DATABASE DemoDB_John SET TRUSTWORTHY ON Now I just have to type the first few characters of the snippet name, select the snippet from SQL Prompt’s candidate list, and execute the code. Simple! The problem is that this can only work once due to the hard-coded database name. Luckily I can leverage a nice feature in SQL Server Management Studio called Template Parameters. If I modify my snippet to be: CREATE DATABASE <DBName,, DemoDB_> GO USE <DBName,, DemoDB_> GO ALTER DATABASE <DBName,, DemoDB_> SET TRUSTWORTHY ON Once I’ve invoked the snippet, I can press Ctrl-Shift-M, which calls up the Specify Values for Template Parameters dialog, where I can type in my database name just once. Now you can click OK and run the query. Easy. Ideally I’d like for SQL Prompt to auto-invoke the Template Parameter dialog for all snippets where it detects the angled bracket syntax, but typing in the keyboard shortcut is a small price to pay for the time savings.

    Read the article

  • Shrink NTFS Windows 7 Partition with GParted

    - by user15961
    I am running a dual-boot system with Windows 7 and Ubuntu 10.10. Initially I allocated about 20GB for my Ubuntu partition; however, I quickly ran out of that space and am now looking to expand my partition. Currently my NTFS partition (450GB) has about 130GB of free space. I tried using GParted to shrink the partition but encountered the following error. I booted into windows so I could run chkdsk but the countdown freezes at 1 upon reboot. I tried multiple methods to resolve that issue but nothing seems to work. Finally I gave up, and now I just want to know what is the best way for me to force GParted to shrink the partition regardless of the errors. I don't really have anything important and I don't mind risking the data. I just don't want to wipe the entire NTFS partition because I don't have a Windows install CD and might require Windows later on for some programs. I tried using sudo ntfsresize but that spews out the same error as GParted... Any ideas? Check and repair file system (ntfs) on /dev/sda2 00:00:09 ( ERROR ) calibrate /dev/sda2 00:00:00 ( SUCCESS ) path: /dev/sda2 start: 36944325 end: 976771119 size: 939826795 (448.14 GiB) check file system on /dev/sda2 for errors and (if possible) fix them 00:00:09 ( ERROR ) ntfsresize -P -i -f -v /dev/sda2 ntfsresize v2.0.0 (libntfs 10:0:0) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 481191318016 bytes (481192 MB) Current device size: 481191319040 bytes (481192 MB) Checking for bad sectors ... Checking filesystem consistency ... Cluster 63468 is referenced multiple times! Cluster 63469 is referenced multiple times! Cluster 63465 is referenced multiple times! Cluster 63466 is referenced multiple times! Cluster 63467 is referenced multiple times! Cluster 165621 is referenced multiple times! Cluster 165622 is referenced multiple times! Cluster 165623 is referenced multiple times! Cluster 165624 is referenced multiple times! ERROR: Filesystem check failed! ERROR: 9 clusters are referenced multiply times. NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE! The usage of the /f parameter is very IMPORTANT! No modification was and will be made to NTFS by this software until it gets repaired.

    Read the article

  • No GRUB Screen or recovery mode on Boot after 12.04 Upgrade

    - by Nick
    I tried the live boot CD and boot-repair, also loaded the Desktop install CD, and it looks like all partitions check out OK. However, when I try to boot Linux (the only bootable partition on the computer) I get a blank screen. Every so often the screen give me something akin to: Assuming write through cache Asking for cache data failed it appears to start booting, then hangs. Ctrl+Alt+Delete shuts down the machine The last message during boot is "STarting TiMidity++ ALSA midi emulation... [OK]" I used boot-repair to generate a boot info report. One thing looks odd to me- it reports a missing core.img on /dev/sda1. Here is the full info: Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info August 2nd 2012] ============================= Boot Info Summary: =============================== = Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos1)/boot/grub on this drive. = Windows is installed in the MBR of /dev/sdb. sda1: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda1 and looks at sector 18406911 of the same hard drive for core.img, but core.img can not be found at this location. Operating System: Ubuntu 12.04.1 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/extlinux/extlinux.conf /boot/grub/core.img sda2: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: swap Boot sector type: - Boot sector info: sdb1: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 63 307,339,514 307,339,452 83 Linux /dev/sda2 307,339,515 312,576,704 5,237,190 5 Extended /dev/sda5 307,339,578 312,576,704 5,237,127 82 Linux swap / Solaris Drive: sdb _______________________________________ Disk /dev/sdb: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 2,048 625,142,447 625,140,400 7 NTFS / exFAT / HPFS "blkid" output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 11b4d633-7863-40b2-a6ca-da5f82c3ad0b ext4 /dev/sda5 cb8d65f4-8cf9-4088-b804-e3dea2151033 swap /dev/sdb1 349E7C109E7BC8BE ntfs Personal1 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sdb1 /media/Personal1 fuseblk (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions) /dev/sr0 /live/image iso9660 (ro,noatime) ...(a bunch of config file info- let me know if anyone wants to see it!) But usually I just get "Cannot Display This Video Mode", which I know means the video output is not usable by the monitor. 
I'm looking for a way to get into a recovery mode. I'd really like to avoid wiping the drive. Any thoughts?

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel but something went wrong and I'm trying to remove it now. The error massege is: mhd@Tarek-Laptop:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: linux-image-2.6.37-020637-generic 0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded. 1 not fully installed or removed. After this operation, 111MB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 188780 files and directories currently installed.) Removing linux-image-2.6.37-020637-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic /etc/default/grub: 33: Syntax error: EOF in backquote substitution run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328. dpkg: error processing linux-image-2.6.37-020637-generic (--remove): subprocess installed post-removal script returned error exit status 1 Errors were encountered while processing: linux-image-2.6.37-020637-generic E: Sub-process /usr/bin/dpkg returned an error code (1) The previous unsloved error is on this bug. This is my grub configuration file: # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. GRUB_DEFAULT=0 #GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` RUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1024x768-24,mtrr=3,scroll=ywrap" video=uvesafb:mode_option=>>1024x768-24<<,mtrr=3,scroll=ywrap" GRUB_CMDLINE_LINUX=" vga=792 splash" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' GRUB_GFXMODE=1024x768-24 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_LINUX_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" thank you for answering.

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent, and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to: (1) define the function of each dependent variable in the system, and (2) trigger a re-calculation, whenever a variable changes, of the variables that depend on it. A naive way to do this would be to write a single class that implements INotifyPropertyChanged, and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend, and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed, and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like as follows:
    public class Model : INotifyPropertyChanged
    {
        private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

        // each setter triggers a "PropertyChanged" event
        public double? Height { get; set; }
        public double? Weight { get; set; }
        public double? BMI { get; set; }

        public Model()
        {
            _calculationManager.DefineDependency<double?>(
                forProperty: model => model.BMI,
                usingCalculation: (height, weight) => weight / Math.Pow(height.Value, 2),
                withInputs: new Expression<Func<Model, double?>>[]
                {
                    model => model.Height,
                    model => model.Weight
                });
        }

        // INotifyPropertyChanged implementation here
    }
    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach: the (mis)use of INotifyPropertyChanged seems to me like a code smell; the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time; and the argument list (height, weight) is redundantly defined in both usingCalculation and withInputs. I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#? Edit: More context: The model currently exists in an Excel spreadsheet, and is being migrated to a C# application. It is run on-demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets, and model parameters set by the business.
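    For reference, here is a minimal sketch of the "dependency map plus PropertyChanged" idea mentioned above. It is an illustration only, simplified to the two-input BMI example and using reflection for property access; the real class would need more care around cascading recalculations and error handling.
    // Minimal sketch only: assumes exactly two double? inputs (as in the BMI example).
    // Requires System.ComponentModel, System.Linq, System.Linq.Expressions, System.Reflection.
    public class CalculationManager<TModel> where TModel : INotifyPropertyChanged
    {
        // input property name -> recalculations that must run when it changes
        private readonly Dictionary<string, List<Action<TModel>>> _dependencies =
            new Dictionary<string, List<Action<TModel>>>();

        public void Attach(TModel model)
        {
            model.PropertyChanged += (sender, e) =>
            {
                List<Action<TModel>> calculations;
                if (e.PropertyName != null && _dependencies.TryGetValue(e.PropertyName, out calculations))
                {
                    foreach (var recalculate in calculations)
                        recalculate((TModel)sender);
                }
            };
        }

        public void DefineDependency<T>(
            Expression<Func<TModel, T>> forProperty,
            Func<double?, double?, T> usingCalculation,
            params Expression<Func<TModel, double?>>[] withInputs)
        {
            // resolve the target property and the input properties once, up front
            var target = (PropertyInfo)((MemberExpression)forProperty.Body).Member;
            var inputs = withInputs
                .Select(expr => (PropertyInfo)((MemberExpression)expr.Body).Member)
                .ToArray();

            Action<TModel> recalculate = model =>
            {
                var args = inputs.Select(p => (double?)p.GetValue(model, null)).ToArray();
                target.SetValue(model, usingCalculation(args[0], args[1]), null);
            };

            // any change to an input triggers the recalculation
            foreach (var input in inputs)
            {
                List<Action<TModel>> list;
                if (!_dependencies.TryGetValue(input.Name, out list))
                    _dependencies[input.Name] = list = new List<Action<TModel>>();
                list.Add(recalculate);
            }
        }
    }
    The sketch assumes the manager gets wired to the model's PropertyChanged event somewhere (the Attach call), and it skips recalculation of variables that are calculated from other calculated variables, which is part of what makes me think there must be an established approach for this.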

    Read the article

  • Unknown C# keywords: params

    - by Chris Skardon
    Often overlooked, and (some may say) unloved, is the params keyword, but it's an awesome keyword, and one you should definitely check out. What does it do? Well, it lets you specify a single parameter that can have a variable number of arguments. You what? Best shown with an example, so let's say we write an add method:
    public int Add(int first, int second)
    {
        return first + second;
    }
    meh, it's alright, does what it says on the tin, but it's not exactly awe-inspiring… Oh noes! You need to add 3 things together???
    public int Add(int first, int second, int third)
    {
        return first + second + third;
    }
    oh yes, you code master you! Overloading a-plenty! Now a fourth… Ok, this is starting to get a bit ridiculous, if only there was some way…
    public int Add(int first, int second, params int[] others)
    {
        return first + second + others.Sum();
    }
    So now I can call this with any number of int arguments? – well, any number > 2..? Yes!
    int ret = Add(1, 2, 3);
    is as valid as:
    int ret = Add(1, 2, 3, 4);
    Of course you probably won't ever really need that method, so what could you use it for? How about adding strings together? What about a logging method? We all know ToString can be an expensive method to call: it might traverse every node on a class hierarchy, or may just be the name of the type… either way, we don't really want to call it if we can avoid it, so how about a logging method like so:
    public void Log(LogLevel level, params object[] objs)
    {
        if (LoggingLevel < level)
            return;

        StringBuilder output = new StringBuilder();
        foreach (var obj in objs)
            output.Append((obj == null) ? "null" : obj.ToString());

        // write the assembled message to wherever your log output goes
        Console.WriteLine(output.ToString());
    }
    Now we only call ToString when we want to, and because we're passing in just objects we don't have to call ToString if the Logging Level isn't what we want… Of course, there are a multitude of things to use params for…
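    For completeness, a call to that Log method might look like the line below (order and customer are just hypothetical objects, and LogLevel/LoggingLevel are assumed to be defined elsewhere):
    // None of the ToString() calls on these arguments happen unless the
    // current LoggingLevel actually lets the message through.
    Log(LogLevel.Debug, "Processed order ", order, " for customer ", customer);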

    Read the article

  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to re-design the class structure of my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this: Let us assume there is an abstract class called person, which has certain properties to define a generic person, such as name, age, date of birth, etc. There are two classes, student and teacher, that implement this abstract class. They add their own unique properties to it. I have designed all three classes to include all the operational logic (details of which are not relevant in the context of the question). Now, I need to create views/reports/data grids which contain details from multiple classes, for example, say, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application, which use data from 3-4 tables, and each of them has multiple input parameters. Considering this particular example, I have written the relevant query using JOIN and the results are as expected and proper; now here is the dilemma: Keeping in mind the single responsibility principle, where should I keep this query? It does not belong to either the Student class, or the Teacher class, or any other class currently present. a) Should I create a new class, say a dataView class, design it as an MVC pattern and keep the query there? What about the other views? How do they fit in this architecture? b) Should I not keep the query in code at all, and make it a DB view? c) Am I completely wrong in my approach? If so, what is the right approach? My considerations are as follows: a) it should be easy to add new views later on if a requirement comes, without having to copy-paste-modify code; b) I would like to make it as loosely coupled as possible so that if minor db structure changes happen, it does not break. I did google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than fetching the data. I have already taken care of the visual aspect of the report using MVC with html templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I am searching with the wrong keyword). Edit1: Modified the title to make it more relevant. Edit2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find this question and the solution on Stack Overflow, which gave me the detailed answer to clear up the confusion.

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
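    To make the "capturing intent" point concrete (my own illustration, not code from Udi Dahan or nServiceBus), the contrast is roughly between one coarse, Excel-like update message and a set of task-based commands:
    // Coarse-grained, intent-free: every field of the customer travels together,
    // so unrelated edits all contend for the same entity.
    public class UpdateCustomerCommand
    {
        public Guid CustomerId { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
        public bool IsPreferred { get; set; }
        public string MaritalStatus { get; set; }
        // ...and dozens more attributes
    }

    // Task-based: each command captures one business intention, so making a
    // customer preferred never conflicts with recording that they have moved
    // or gotten married.
    public class MakeCustomerPreferredCommand
    {
        public Guid CustomerId { get; set; }
    }

    public class CustomerHasMovedCommand
    {
        public Guid CustomerId { get; set; }
        public string NewAddress { get; set; }
    }
    The question, then, is how far toward the second style one should go when a single user action legitimately touches hundreds of entities and attributes.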

    Read the article

  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following: Used a new strategy for version numbers; they will now follow the pattern Major.Minor.TargetSQLServerVersion.Revision. Added support for Auto Configurations. Fixed a bug that reported an incorrect number of errors and warnings to Log Providers. Fixed a bug that prevented correct casting of values when using the /Set and /Param options. Errors and Warnings are now counted more precisely. Updated database and log import scripts to categorize logs by projects and sections. E.g.: Project: MyBIProject; Sections: Staging, Datawarehouse. Removed unused report stored procedures from the database. Updated Samples: 12 samples are now available to show ALL DTLoggedExec features. From this version only SSIS 2008 will be supported. http://dtloggedexec.codeplex.com/releases/view/62218  It is useful to say something more on a couple of specific points: From this version only SSIS 2008 will be supported. Yes, Integration Services 2005 is not supported anymore. The latest version capable of running SSIS 2005 packages is 1.0.0.2. Updated database and log import scripts to categorize logs by projects and sections. When you import a log file, you can now assign it to a Project and to a Section of that project. In this way it's easier to gather statistical information for an entire project or a subsection of it. This also allows storing logged data of packages belonging to different projects in the same database. For example:  Updated Samples: A complete set of samples that shows how to use all DTLoggedExec features is now shipped with the product. Enjoy! Added support for Auto Configurations: This point will get a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words, I can just say that you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically. You just need to write a single statement as a parameter for DTLoggedExec. This feature can simplify deployment *a lot* :)   In the next days I'll write the mentioned post on Auto-Configurations and I'll update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx

    Read the article

  • Why does creating dynamic bodies in JBox2D freeze my app?

    - by Amplify91
    My game hangs/freezes when I create dynamic bullet objects with Box2D and I don't know why. I am making a game where the main character can shoot bullets by the user tapping on the screen. Each touch event spawns a new FireProjectileEvent that is handled properly by an event queue. So I know my problem is not trying to create a new body while the box2d world is locked. My bullets are then created and managed by an object pool class like this: public Projectile getProjectile(){ for(int i=0;i<mProjectiles.size();i++){ if(!mProjectiles.get(i).isActive){ return mProjectiles.get(i); } } return mSpriteFactory.createProjectile(); } mSpriteFactory.createProjectile() leads to the physics component of the Projectile class creating its box2d body. I have narrowed the issue down to this method and it looks like this: public void create(World world, float x, float y, Vec2 vertices[], boolean dynamic){ BodyDef bodyDef = new BodyDef(); if(dynamic){ bodyDef.type = BodyType.DYNAMIC; }else{ bodyDef.type = BodyType.STATIC; } bodyDef.position.set(x, y); mBody = world.createBody(bodyDef); PolygonShape dynamicBox = new PolygonShape(); dynamicBox.set(vertices, vertices.length); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = dynamicBox; fixtureDef.density = 1.0f; fixtureDef.friction = 0.0f; mBody.createFixture(fixtureDef); mBody.setFixedRotation(true); } If the dynamic parameter is set to true my game freezes before crashing, but if it is false, it will create a projectile exactly how I want it just doesn't function properly (because a projectile is not a static object). Why does my program fail when I try to create a dynamic object at runtime but not when I create a static one? I have other dynamic objects (like my main character) that work fine. Any help would be greatly appreciated. This is a screenshot of a method profile I did: Especially notable is number 8. I'm just still unsure what I'm doing wrong. Other notes: I am using JBox2D 2.1.2.2. (Upgraded from 2.1.2.1 to try to fix this problem) When the application freezes, if I hit the back button, it appears to move my game backwards by one update tick. Very strange.

    Read the article

  • [JAVA] How to make my Oracle update/insert action through Java faster?

    - by gunbuster363
    How can I make my Oracle update/insert actions through Java faster? Hi everyone, I am facing a problem in my company: our program's speed is not fast enough. To be more specific, we are a telecommunication company and this program handles the call/internet surfing transactions made by every mobile phone user in our city. Because the amount of download content generated by the iPhone users is just too much, our program cannot handle it fast enough. The situation is, the number of transactions made by users is double the number of transactions processed by our program. Most of the running time of the program is dominated by DB transactions. I've searched through the internet and browsed some sites (for example: http://www.javaperformancetuning.com/tips/rawtips.shtml) talking about Java performance with a DB, but I cannot find a suggestion suitable for us. These pieces of advice are not applicable/already used, for instance: 1) Use prepared statements. Use parametrized SQL. Already using prepared statements; each time we use different parameters by clearing parameters and setting parameters. 2) Tune the SQL to minimize the data returned (e.g. not 'SELECT *'). Sure, already done. 3) Use connection pooling. We hold a single connection during the program's execution, and I doubt that pooling can solve the problem because our program acts as 1 user, so there is no problem of concurrent access to the DB. If any of you think pooling is good, please tell me why. Thanks. 4) Try to combine queries and batch updates. Cannot do it. Every query/insert/update depends on the database's information. For example, we look up the DB for the client's information; if we cannot find his usage, we insert the usage into the DB, otherwise we do an update. 5) Close resources (Connections, Statements, ResultSets) when finished. Sure. 6) Select the fastest JDBC driver. I don't know. I've searched on the internet about the types of drivers available and I am very confused. We use oracle.jdbc.driver.OracleDriver and we use thin instead of oci, that's all I know. In addition, our program is two-tier (java <- oracle). 7) Turn off auto-commit. Already done that. Looking forward to any help, thank you very much.

    Read the article

  • ssao implementation

    - by Irbis
    I try to implement a ssao based on this tutorial: link I use a deferred rendering and world coordinates for shading calculations. When saving gbuffer a vertex shader output looks like this: worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0)); normal = normalize(normalModelMatrix * inNormal); gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0); Next for a ssao calculations I render a scene as a full screen quad and I save an occlusion parameter in a texture. (Vertex positions in the world space: link Normals in the world space: link) SSAO implementation: subroutine (RenderPassType) void ssao() { vec2 texCoord = CalcTexCoord(); vec3 worldPos = texture(texture0, texCoord).xyz; vec3 normal = normalize(texture(texture1, texCoord).xyz); vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4); vec3 rvec = texture(texture2, texCoord * noiseScale).xyz; vec3 tangent = normalize(rvec - normal * dot(rvec, normal)); vec3 bitangent = cross(normal, tangent); mat3 tbn = mat3(tangent, bitangent, normal); float occlusion = 0.0; float radius = 4.0; for (int i = 0; i < kernelSize; ++i) { vec3 pix = tbn * kernel[i]; pix = pix * radius + worldPos; vec4 offset = vec4(pix, 1.0); offset = ProjectionMatrix * ViewMatrix * offset; offset.xy /= offset.w; offset.xy = offset.xy * 0.5 + 0.5; float sample_depth = texture(texture0, offset.xy).z; float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0; occlusion += (sample_depth <= pix.z ? 1.0 : 0.0); } outputColor = vec4(occlusion, occlusion, occlusion, 1); } That code gives following results: camera looking towards -z world space: link camera looking towards +z world space: link I wonder if it is possible to use world coordinates in the above code ? When I move camera I get different results because world space positions don't change. Can I treat worldPos.z as a linear depth ? What should I change to get a correct results ? I except the white areas in place of occlusion, so the ground should has the white areas only near to the object.

    Read the article

  • What causes Multi-Page allocations?

    - by SQLOS Team
    Writing about changes in the Denali Memory Manager In his last post Rusi mentioned: " In previous SQL versions only the 8k allocations were limited by the ‘max server memory’ configuration option.  Allocations larger than 8k weren’t constrained." In SQL Server versions before Denali single page allocations and multi-Page allocations are handled by different components, the Single Page Allocator (which is responsible for Buffer Pool allocations and governed by 'max server memory') and the Multi-Page allocator (MPA) which handles allocations of greater than an 8K page. If there are many multi-page allocations this can affect how much memory needs to be reserved outside 'max server memory' which may in turn involve setting the -g memory_to_reserve startup parameter. We'll follow up with more generic articles on the new Memory Manager structure, but in this post I want to clarify what might cause these larger allocations. So what kinds of query result in MPA activity? I was asked this question the other day after delivering an MCM webcast on Memory Manager changes in Denali. After asking around our Dev team I was connected to one of our test leads Sangeetha who had tested the plan cache, and kindly provided this example of an MPA intensive query: A workload that has stored procedures with a large # of parameters (say > 100, > 500), and then invoked via large ad hoc batches, where each SP has different parameters will result in a plan being cached for this “exec proc” batch. This plan will result in MPA.   Exec proc_name @p1, ….@p500 Exec proc_name @p1, ….@p500 . . . Exec proc_name @p1, ….@p500 Go   Another workload would be large adhoc batches of the form: Select * from t where col1 in (1, 2, 3, ….500) Select * from t where col1 in (1, 2, 3, ….500) Select * from t where col1 in (1, 2, 3, ….500) … Go  In Denali all page allocations are handled by an "Any size page allocator" and included in 'max server memory'. The buffer pool effectively becomes a client of the any size page allocator, which in turn relies on the memory manager. - Guy Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation. Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left. I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading! Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all? Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane) which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
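    In case it helps frame the question, here is a sketch of one way the "fake it" approach could work: project the aim direction (and its ground-plane perpendicular) through the camera's 45 degree tilt, then reuse the projected angle as the SpriteBatch rotation and the projected lengths as the Vector2 scale. This is my own approximation; it assumes XNA types, that the arrow texture points along +X, and that aimAngle, position, origin, aimerTexture, layerDepth and spriteBatch already exist elsewhere.
    // Sketch of an orthographic "fake 3D" aimer. Axis and sign conventions
    // depend on your camera setup, so treat this as a starting point only.
    Matrix cameraTilt = Matrix.CreateRotationX(-MathHelper.PiOver4);

    // forward = aim direction on the ground (XZ) plane, side = its perpendicular
    Vector3 forward = new Vector3((float)Math.Cos(aimAngle), 0f, (float)Math.Sin(aimAngle));
    Vector3 side = new Vector3(-forward.Z, 0f, forward.X);

    Vector3 f = Vector3.Transform(forward, cameraTilt);
    Vector3 s = Vector3.Transform(side, cameraTilt);

    // the X/Y components are what lands on screen under this orthographic setup
    float rotation = (float)Math.Atan2(f.Y, f.X);
    Vector2 scale = new Vector2(
        new Vector2(f.X, f.Y).Length(),   // arrow gets shorter when facing "north/south"
        new Vector2(s.X, s.Y).Length());  // and relatively wider at the same time

    spriteBatch.Draw(aimerTexture, position, null, Color.White,
                     rotation, origin, scale, SpriteEffects.None, layerDepth);
    It is not a true perspective skew (SpriteBatch's scale argument cannot shear the texture), but it does reproduce the longer-sideways / shorter-north-south behaviour described above.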

    Read the article

  • Cheese won't start

    - by Anthony Hohenheim
    I can't start Cheese Webcam Booth. It starts loading and there is a brief moment when the window shows up, but then it disappears, as if it shuts itself down, and it's not in the system monitor. My webcam works perfectly in Skype video calls. I installed and ran Camorama and it gave me an error:

        Could not connect to video device (/dev/video0)
        Please check connection

    When I run lsusb I get this line for my webcam:

        Bus 002 Device 002: ID 04f2:b210 Chicony Electronics Co., Ltd

    And for my graphics card, running lspci:

        VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)

    It's not a pressing matter, but it gets on my nerves: if it works in Skype, why do Cheese and other programs refuse to run? As I said, it's not a big deal, but any help would be appreciated. Running Cheese in a terminal:

        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gdk-WARNING **: The program 'cheese' received an X Window System error. This probably reflects a bug in the program. The error was 'BadDrawable (invalid Pixmap or Window parameter)'. (Details: serial 932 error_code 9 request_code 137 minor_code 9) (Note to programmers: normally, X errors are reported asynchronously; that is, you will receive the error a while after causing it. To debug your program, run it with the --sync command line option to change this behavior. You can then get a meaningful backtrace from your debugger if you break on the gdk_x_error() function.)
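
    The tail of that Gdk warning suggests the next debugging step; a small sketch of it (assuming gdb is installed, and that cheese, as a GTK application, accepts the --sync option the message refers to):

        # Make X errors synchronous and break where GDK reports them, as the message advises
        gdb --args cheese --sync
        (gdb) break gdk_x_error
        (gdb) run
        # once it stops, 'backtrace' shows which call triggered the BadDrawable error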

    Read the article

  • How to send multiple MVP matrices to a vertex shader in OpenGL ES 2.0

    - by Carbon Crystal
    I'm working my way through optimizing the rendering of sprites in a 2D game using OpenGL ES and I've hit the limit of my knowledge when it comes to GLSL and vertex shaders. I have two large float buffers containing my vertex coordinates and texture coordinates (eventually this will be one buffer) for multiple sprites, in order to perform a single glDrawArrays call. This works, but I've hit a snag when it comes to passing the transformation matrix into the vertex shader. My shader code is:

        uniform mat4 u_MVPMatrix;
        attribute vec4 a_Position;
        attribute vec2 a_TexCoordinate;
        varying vec2 v_TexCoordinate;
        void main()
        {
            v_TexCoordinate = a_TexCoordinate;
            gl_Position = u_MVPMatrix * a_Position;
        }

    In Java (Android) I am using a FloatBuffer to store the vertex/texture data, and this is provided to the shader like so:

        mGlEs20.glVertexAttribPointer(mVertexHandle, Globals.GL_POSITION_VERTEX_COUNT, GLES20.GL_FLOAT, false, 0, mVertexCoordinates);
        mGlEs20.glVertexAttribPointer(mTextureCoordinateHandle, Globals.GL_TEXTURE_VERTEX_COUNT, GLES20.GL_FLOAT, false, 0, mTextureCoordinates);

    (The Globals.GL_POSITION_VERTEX_COUNT etc. are just integers with the value 2 right now.) And I'm passing the MVP (Model/View/Projection) matrix buffer like this:

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mModelCoordinates);

    (mModelCoordinates is a FloatBuffer containing 16-float sequences representing the MVP matrix for each sprite.) This renders my scene, but all the sprites share the same transformation, so it's obviously only picking the first 16 elements from the buffer, which makes sense since I am passing in "1" as the second parameter. The documentation for this method says: "This should be 1 if the targeted uniform variable is not an array of matrices, and 1 or more if it is an array of matrices." So I tried modifying the shader with a fixed-size array large enough to accommodate most of my scenarios:

        uniform mat4 u_MVPMatrix[1000];

    But this led to an error in the shader:

        cannot convert from 'uniform array of 4X4 matrix of float' to 'Position 4-component vector of float'

    This seems wrong anyway, as it's not clear to me how the shader would know when to move on to the next matrix. Does anyone have an idea how I can get my shader to pick up a different MVP matrix (i.e. the NEXT 16 floats) from my MVP buffer for every 4 vertices it encounters? (I am using GL_TRIANGLE_STRIP, so each sprite has 4 vertices.) Thanks!
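
    One common way around this limitation - sketched below with hypothetical names, not something taken from the original post - is to keep a much smaller uniform matrix array and add a per-vertex attribute holding the index of the matrix that each sprite's four vertices should use. ES 2.0 vertex shaders allow dynamically indexing a uniform array, but the array size is bounded by GL_MAX_VERTEX_UNIFORM_VECTORS (each mat4 costs 4 vectors), so an array of 1000 matrices would not fit on typical hardware.

        // Sketch only: the names and the array size of 24 are assumptions, not from the post.
        uniform mat4 u_MVPMatrix[24];
        attribute vec4 a_Position;
        attribute vec2 a_TexCoordinate;
        attribute float a_MVPIndex;   // same value for all 4 vertices of a sprite
        varying vec2 v_TexCoordinate;
        void main()
        {
            v_TexCoordinate = a_TexCoordinate;
            gl_Position = u_MVPMatrix[int(a_MVPIndex)] * a_Position;
        }

    On the Java side this would mean uploading the matrices with GLES20.glUniformMatrix4fv(handle, numSprites, false, matrixBuffer) and feeding a_MVPIndex through another glVertexAttribPointer call. The alternative many sprite batchers use is to drop per-sprite matrices entirely and pre-transform the vertex positions on the CPU before filling the buffer.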

    Read the article

  • PASS Summit for SQL Starters

    - by Davide Mauri
    I’ve received a buch of emails from PASS Summit “First Timers” that are also somehow new to SQL Server (for “somehow” I mean people with less than 6 month experience but with some basic knowledge of SQL Server engine) or are catching up from SQL Server 2000. The common question regards the session one should not miss to have a broad view of the entire SQL Server platform have some insight into some specific areas of SQL Server Given that I’m on (semi-)vacantion and that I have more free time (not true, I have to prepare slides & demos for several conferences, PASS Summit  - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them…but let’s pretend it to be true), I’ve decided to make a post to answer to this common questions. Of course this is my personal point of view and given the fact that the number and quality of session that will be delivered at PASS Summit is so high that is very difficoult to make a choice, fell free to jump into the discussion and leave your feedback or – even better – answer with another post. I’m sure it will be very helpful to all the SQL Server beginners out there. I’ve imposed to myself to choose 6 session at maximum for each Track. Why 6? Because it’s the maximum number of session you can follow in one day, and given that all the session will be on the Summit DVD, they are the answer to the following question: “If I have one day to spend in training, which session I should watch?”. Of course a Summit is not like a Course so a lot of very basics concept of well-established technologies won’t be found here. Analysis Services, Integration Services, MDX are not part of the Summit this time (at least for the basic part of them). Enough with that, let’s start with the session list ideal to have a good Overview of all the SQL Server Platform: Geospatial Data Types in SQL Server 2012 Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search XQuery and XML in SQL Server: Common Problems and Best Practice Solutions Microsoft's Big Play for Big Data Dashboards: When to Choose Which MSBI Tool Microsoft BI End-User Tools 360° for what concern Database Development, I recommend the following sessions Understanding Transaction Isolation Levels What to Look for in Execution Plans Improve Query Performance by Fixing Bad Parameter Sniffing A Window into Your Data: Using SQL Window Functions Practical Uses and Optimization of New T-SQL Features in SQL Server 2012 Taking MERGE Beyond the Basics For Business Intelligence Information Delivery Analyzing SSAS Data with Excel Building Compelling Power View Reports Managed Self-Service BI PowerPivot 101  SharePoint for Business Intelligence The Best Microsoft BI Tools You've Never Heard Of and for Business Intelligence Architecture & Development BI Power Hour Building a Tabular Model Database Enterprise Information Management: Bringing Together SSIS, DQS, and MDS SSIS Design Patterns Storing Columnstore Indexes Hadoop and Its Ecosystem Components in Action Beside the listed sessions, First Timers should also take a look the the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx See you at PASS Summit!

    Read the article

< Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >