Search Results

Search found 6699 results on 268 pages for 'dual layer'.

Page 76/268 | < Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >

  • Can a PC with a 1.2 GHz dual core run VS 2013? [on hold]

    - by moo2
    Edit: This question is about a tool used for programming, Microsoft Visual Studio 2013. I had already read the minimum requirements for VS 2013 before I posted, or I wouldn't have asked the question, and I searched the web for an answer for a few hours before posting. Why should I go out and spend $400 on a laptop if I can spend less than $100 and use it to learn how to program? In the context of the question it is clear I was not asking for advice on what laptop to buy, and it was also clear I was not asking for someone to walk me through the system requirements of VS 2013. This is a vast community, and I was wondering if someone in it has experience dealing with my question. In other words, has anyone ever tried anything like it? If I had an old computer sitting around I would have tested it; at the moment all I have is a friend's Chromebook that I'm borrowing. End edit. I know the requirements for Microsoft Visual Studio 2013 state a 1.6 GHz processor. I'm looking at getting a laptop for school, and I found one that is old but cheap and would work for college. The Dell D430 I'm looking at has a 1.2 GHz Core 2 Duo CPU, an 80 GB hard drive, 2 GB of RAM, and Windows 7 Home Premium. It's refurbished, and I can get it for less than most phones cost, from a Microsoft Authorized Refurbisher. I know it's not a Skyrim rig, but it's lightweight and can handle being a college laptop and doing word processing. Would Visual Studio 2013 run at all? If it is slow I'm not concerned; I just want to know whether it would work, compile my assignments, and run them. I'd be using this laptop for doing assignments and learning programming languages, not for coding the next social media sensation.

    Read the article

  • Mimic CALayer shadow properties found in iPhone OS 3.2 for OS 3.1

    - by niblha
    The CALayer shadow properties like shadowOffset, shadowRadius, and shadowColor are not available in iPhone OS versions below 3.2, and I'm wondering how I could mimic that functionality for 3.1 and below. I want to use this to add drop shadows to UIViews in a clean way, so that the shadows are drawn at the layer level somehow, and not by drawing them in a view's -(void)drawRect:(CGRect)rect method, which requires shrinking the view's actual frame to accommodate the shadow. (This shrinking approach has been proposed in the other UIView drop-shadow related questions I found here on SO.) I was thinking a layered approach would be cleaner. For example, I tried subclassing CALayer and adding a separate shadow layer as a sublayer, but then that would be drawn on top of whatever was drawn in the drawRect: method of the UIView that had the main layer as its backing layer. I've also tried implementing the subclassed CALayer's drawInContext: something like this: - (void)drawInContext:(CGContextRef)ctx { // code to draw shadow for a frame the size of the layer's frame [super drawInContext:ctx]; } But then the shadow is still clipped to the current clipping bounding box of the context, which seems to be the layer's own frame. I also had some idea of redirecting the drawing of the main layer to a sublayer, which would be placed above another sublayer that had the shadow drawn onto it. Then I would probably get rid of the clipping, and the shadow would be farthest back. But I couldn't really wrap my head around how I would do that, and it doesn't really feel like a clean approach. Any ideas on how to go about this? Just to make clear how my UIView drop-shadow question is different from the other ones I found here on SO: I do not want to shrink the actual drawing frame of a UIView to accommodate a shadow. I want it to somehow be on a separate layer in the background, without being clipped.

    Read the article

  • How to refactor use of the general Exception?

    - by Colin
    Our code catches the general Exception everywhere. Usually it writes the error to a log table in the database and shows a MessageBox to the user to say that the requested operation failed. If there is database interaction, the transaction is rolled back. I have introduced a business logic layer and a data access layer to unravel some of the logic. In the data access layer, I have chosen not to catch anything, and I also throw ArgumentNullExceptions and ArgumentOutOfRangeExceptions so that the message passed up the stack does not come straight from the database. In the business logic layer I put a try/catch; in the catch I roll back the transaction, do the logging, and rethrow. In the presentation layer there is another try/catch that displays a MessageBox. I am now thinking about catching a DataException and an ArgumentException instead of an Exception where I know the code only accesses a database. Where the code accesses a web service, I thought I would create my own "WebServiceException", which would be created in the data access layer whenever an HttpException, WebException or SoapException is thrown. So now I will generally be catching two or three exceptions where currently I catch just the general Exception, and that seems OK to me. Does anyone wrap exceptions up again to carry the message up to the presentation layer? I think I should probably add a try/catch to Main() that catches Exception, attempts to log it, displays an "Application has encountered an error" message, and exits the application. So, my question is: does anyone see any holes in my plan? Are there any obvious exceptions that I should be catching, or do these pretty much cover it? (Other than file access - I think there is only one place where we read or write a config file.)
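
    A minimal sketch of the wrap-and-rethrow idea described above. The question is about .NET, but the pattern translates directly; the Java names here (DataAccessException, OrderService) are hypothetical and only for illustration, with the original exception kept as the cause so it can still be logged.

        // Business layer wraps the low-level exception; the presentation layer only sees the wrapper.
        import java.sql.SQLException;

        class DataAccessException extends RuntimeException {
            DataAccessException(String message, Throwable cause) {
                super(message, cause);              // keep the original exception for the log
            }
        }

        class OrderService {
            void placeOrder(String orderId) {
                if (orderId == null || orderId.isEmpty()) {
                    throw new IllegalArgumentException("orderId must not be empty");
                }
                try {
                    saveToDatabase(orderId);
                } catch (SQLException e) {
                    // roll back / log here, then rethrow a wrapped exception so the
                    // presentation layer never shows a raw driver message
                    throw new DataAccessException("Could not save order " + orderId, e);
                }
            }

            private void saveToDatabase(String orderId) throws SQLException {
                throw new SQLException("simulated database failure");  // stand-in for the data layer
            }
        }

        public class LayeredExceptionDemo {
            public static void main(String[] args) {
                try {
                    new OrderService().placeOrder("42");
                } catch (DataAccessException | IllegalArgumentException e) {
                    // presentation layer: friendly message for the user, cause still available for logging
                    System.err.println("Operation failed: " + e.getMessage());
                }
            }
        }

    The outermost catch-all in Main() that the question proposes plays the same role as the catch in main() here: a last resort for anything the typed catches miss.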

    Read the article

  • MS Surface Tag Visualizer steals contact events

    - by Isak Savo
    I'm struggling with the TagVisualizer control on an MS Surface project. In theory the control seems great, allowing you to respond to input from real-world physical objects. The problem is that the control will cover the entire screen (since I want to capture tags on the entire screen), and as such no other controls in my app will receive the touch events (unless they are direct ascendants in the visual tree). In my app, I want to have a "layer" type of approach, where each layer can respond to (contact) input: Window `- Grid `- LayersPanel `- TagVisualizer `- Layer 1 `- Layer 2 `- Layer 3 `- Layer 4. Now it doesn't matter where I put the TagVisualizer; it's always going to steal contact events from all or some of the other layers (due to the nature of RoutedEvents). To me, it seems like the control is completely useless in practice, as it will always interfere with your application's other controls. What am I missing here? So my questions are: Any suggestions on how to work around this? Has anyone used TagVisualizers in a similar scenario? If so, how did you solve this? By the way, the layers themselves all work fine, since they will only steal events that are directly on top of their sub-elements (the rest of the layer is invisible to hit testing).

    Read the article

  • How do I implement/build/create an 'in-memory database' for my unit tests?

    - by Michel
    Hi all, I started unit testing a while ago, and as it turned out I was doing more regression testing than unit testing, because I also included my database layer and thus went to the database every time. So I implemented Unity to inject a fake database layer, but of course I still want to store some data, and the main opinion was: "create an in-memory database". But what is that, and how do I implement it? My main question is: I think I have to fake the database layer, but doesn't that make me create a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests :) At the end of this question I'll give an explanation of the situation I got into on the project I just started on, and I was wondering if this is the way to go. Michel. The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together. For the real database we're using the Entity Framework, and that works very simply. And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist, etc. the data. It sounds weird that there is so much work in the fake layer and so little in the real layer. I hope this all makes sense :)
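
    For the "fake database layer" idea, the fake usually does not need to be a database at all - it just needs to implement the same interface as the real data access layer and keep its data in memory. The question is about .NET with Unity and Entity Framework; the following is only a minimal Java sketch of that shape, with made-up names (Customer, CustomerRepository).

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Optional;

        class Customer {
            final int id;
            final String name;
            Customer(int id, String name) { this.id = id; this.name = name; }
        }

        interface CustomerRepository {
            void save(Customer customer);
            Optional<Customer> findById(int id);
        }

        // The fake used in unit tests: no SQL, no XML files, just memory.
        class InMemoryCustomerRepository implements CustomerRepository {
            private final Map<Integer, Customer> store = new HashMap<>();

            public void save(Customer customer) {
                store.put(customer.id, customer);
            }

            public Optional<Customer> findById(int id) {
                return Optional.ofNullable(store.get(id));
            }
        }

        public class InMemoryRepositoryDemo {
            public static void main(String[] args) {
                CustomerRepository repo = new InMemoryCustomerRepository();  // injected in tests
                repo.save(new Customer(1, "Michel"));
                System.out.println(repo.findById(1).isPresent());            // prints: true
            }
        }

    The DI container (Unity in the question) then decides which implementation to inject: the in-memory fake in unit tests, the real database-backed one in production.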

    Read the article

  • Where do I attach the StoreKit delegate and observer in a Cocos2d App?

    - by Jeff B
    I have figured out how all of the StoreKit stuff works and have actually tested working code... however, I have a problem. I made my "store" layer/scene the SKProductsRequestDelegate. Is this the correct thing to do? I get the initial product info like so: SKProductsRequest *productRequest = [[SKProductsRequest alloc] initWithProductIdentifiers: productIDs]; [productRequest setDelegate: self]; [productRequest start]; The problem is that if I transition to a new scene while a request is in progress, the current layer is retained by the productRequest. This means that touches on my new scene/layer are handled by both the new layer and the old layer. I could cancel the productRequest when leaving the scene, but (1) I do not know whether it is in progress at that point, and (2) I cannot release it because it may or may not have been released by the request delegates. There has got to be a better way to do this. I could make the delegate a class external to the current layer, but then I do not know how to easily update the layer with the product information when the handler is called.

    Read the article

  • Building Tag Cloud Declarative ADF Component

    - by Arunkumar Ramamoorthy
    When building a website, there could be a requirement to add a tag cloud to let the users know the popular tags (or terms) used on the site. In this blog, we will build a simple declarative component to be used as a tag cloud on a page. To start with, we first create the declarative component, which will display the tag cloud. We do that by creating a new custom application from the new gallery. Give a name for the app and the project and, from the new gallery, create a new ADF Declarative Component. We need to specify the name for the declarative component, the attributes in it, etc. For displaying the tags as a cloud, we need to pass the content to this component, so we create an attribute to hold the values for the tags. Let us name it "value" and make it of type java.lang.String. After this, to hold the component, we need to create a tag library. This can be done by clicking on the Add Tag Library button. Clicking OK in all the open dialogs creates a declarative component for us. Now we need to display the tag cloud based on the value passed to the component. To do that, we assume that the value is a tree binding and has two attributes in it, say "Name" and "Weight". To make a tag cloud, we put the "Name"s together in a loop and set each one's font size based on its "Weight". After putting our logic to work, here is how the source looks. Attributes added to declarative components can be retrieved using #{attrs.<attribute_name>}. Now we need to deploy this project as an ADF Library JAR file, so that it can be distributed to consuming applications. We select ADF Library Jar as the type and create the profile, and we get the JAR file after deployment. To test the functionality, we can create a simple Fusion Web Application. To add our custom component to the consuming application, we can create a file system connection pointing to the location of the JAR file and add it, or add it through the project properties of the ViewController project. Now our custom component has been added to the consuming application. We can test that by creating a VO in the model project with a query like: select 'Faces' as Name, 25 as Weight from dual union all select 'ADF', 15 from dual union all select 'ADFdi', 30 from dual union all select 'BC4J', 20 from dual union all select 'EJB', 40 from dual union all select 'WS', 35 from dual. Add this VO to the AppModule, so that it is exposed to the data control. Then we can create a jspx page and add a tree binding to the VO created. We can now see that our Tag Cloud declarative component is available in the component palette. It can be inserted from the component palette into our page, with its value property set to the CollectionModel of the tree binding created. Now that we've created the declarative component and added it to our page successfully, we can run the page to see how it looks. As per the query, the tags are displayed in different fonts, based on their weight.
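
    The heart of the component is the weight-to-font-size mapping. The blog does this with EL inside the declarative component's markup; the following plain-Java sketch only illustrates the same scaling logic on the blog's sample data, with assumed minimum/maximum font sizes of 10pt and 30pt.

        // Plain-Java sketch of the weight-to-font-size scaling the component performs.
        // The 10pt/30pt bounds are assumed values, not taken from the blog.
        import java.util.Map;

        public class TagCloudSizing {
            static int fontSizeFor(int weight, int minWeight, int maxWeight) {
                int minFont = 10, maxFont = 30;                        // assumed bounds
                if (maxWeight == minWeight) return (minFont + maxFont) / 2;
                double ratio = (weight - minWeight) / (double) (maxWeight - minWeight);
                return (int) Math.round(minFont + ratio * (maxFont - minFont));
            }

            public static void main(String[] args) {
                // Same sample data as the blog's query against dual
                Map<String, Integer> tags = Map.of("Faces", 25, "ADF", 15, "ADFdi", 30,
                                                   "BC4J", 20, "EJB", 40, "WS", 35);
                int min = tags.values().stream().min(Integer::compare).orElse(0);
                int max = tags.values().stream().max(Integer::compare).orElse(0);
                tags.forEach((name, weight) ->
                    System.out.printf("%s -> %dpt%n", name, fontSizeFor(weight, min, max)));
            }
        }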

    Read the article

  • Round date to a 10-minute interval

    - by Peter Lang
    I have a DATE column that I want to round to the next-lower 10 minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of minutes. WITH test_data AS ( SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual ) -- #end of test-data SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60) FROM test_data And here is the result: 01.01.2010 10:00:00    01.01.2010 10:00:00 01.01.2010 10:05:00    01.01.2010 10:00:00 01.01.2010 10:09:59    01.01.2010 10:00:00 01.01.2010 10:10:00    01.01.2010 10:10:00 01.01.2099 10:00:33    01.01.2099 10:00:00 Works as expected, but is there a better way? EDIT: I was curious about performance, so I did the following test with 500.000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions. DECLARE t TIMESTAMP := SYSTIMESTAMP; BEGIN FOR i IN ( WITH test_data AS ( SELECT SYSDATE + ROWNUM / 5000 d FROM dual CONNECT BY ROWNUM <= 500000 ) SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60) FROM test_data ) LOOP NULL; END LOOP; dbms_output.put_line( SYSTIMESTAMP - t ); END; This approach took 03.24 s.
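
    The rule used above - truncate the seconds, then subtract minutes mod 10 - is easy to sanity-check outside the database. A small java.time sketch of the same arithmetic, purely as an illustration and not an Oracle alternative:

        // Illustration only: the "truncate seconds, subtract minutes mod 10" rule in java.time.
        import java.time.LocalDateTime;
        import java.time.temporal.ChronoUnit;

        public class TenMinuteFloor {
            static LocalDateTime floorToTenMinutes(LocalDateTime t) {
                LocalDateTime truncated = t.truncatedTo(ChronoUnit.MINUTES);  // drop the seconds
                return truncated.minusMinutes(truncated.getMinute() % 10);    // drop the last digit of minutes
            }

            public static void main(String[] args) {
                System.out.println(floorToTenMinutes(LocalDateTime.parse("2010-01-01T10:09:59"))); // 2010-01-01T10:00
                System.out.println(floorToTenMinutes(LocalDateTime.parse("2010-01-01T10:10:00"))); // 2010-01-01T10:10
            }
        }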

    Read the article

  • workaround for ORA-03113: end-of-file on communication channel

    - by Jefferstone
    The call to TEST_FUNCTION below fails with "ORA-03113: end-of-file on communication channel". A workaround is presented in TEST_FUNCTION2. I boiled down the code as my actual function is far more complex. Tested on Oracle 11G. Anyone have any idea why the first function fails? CREATE OR REPLACE TYPE "EMPLOYEE" AS OBJECT ( employee_id NUMBER(38), hire_date DATE ); CREATE OR REPLACE TYPE "EMPLOYEE_TABLE" AS TABLE OF EMPLOYEE; CREATE OR REPLACE FUNCTION TEST_FUNCTION RETURN EMPLOYEE_TABLE IS table1 EMPLOYEE_TABLE; table2 EMPLOYEE_TABLE; return_table EMPLOYEE_TABLE; BEGIN SELECT CAST(MULTISET ( SELECT user_id, created FROM all_users WHERE LOWER(username) < 'm' ) AS EMPLOYEE_TABLE) INTO table1 FROM dual; SELECT CAST(MULTISET ( SELECT user_id, created FROM all_users WHERE LOWER(username) >= 'm' ) AS EMPLOYEE_TABLE) INTO table2 FROM dual; SELECT CAST(MULTISET ( SELECT employee_id, hire_date FROM TABLE(table1) UNION SELECT employee_id, hire_date FROM TABLE(table2) ) AS EMPLOYEE_TABLE) INTO return_table FROM dual; RETURN return_table; END TEST_FUNCTION; CREATE OR REPLACE FUNCTION TEST_FUNCTION2 RETURN EMPLOYEE_TABLE IS table1 EMPLOYEE_TABLE; table2 EMPLOYEE_TABLE; return_table EMPLOYEE_TABLE; BEGIN SELECT CAST(MULTISET ( SELECT user_id, created FROM all_users WHERE LOWER(username) < 'm' ) AS EMPLOYEE_TABLE) INTO table1 FROM dual; SELECT CAST(MULTISET ( SELECT user_id, created FROM all_users WHERE LOWER(username) >= 'm' ) AS EMPLOYEE_TABLE) INTO table2 FROM dual; WITH combined AS ( SELECT employee_id, hire_date FROM TABLE(table1) UNION SELECT employee_id, hire_date FROM TABLE(table2) ) SELECT CAST(MULTISET ( SELECT * FROM combined ) AS EMPLOYEE_TABLE) INTO return_table FROM dual; RETURN return_table; END TEST_FUNCTION2; SELECT * FROM TABLE (TEST_FUNCTION()); -- Throws exception ORA-03113. SELECT * FROM TABLE (TEST_FUNCTION2()); -- Works

    Read the article

  • How to handle input and parameter validation between layers?

    - by developr
    If I have a 3-layer web forms application that takes user input, I know I can validate that input using validation controls in the presentation layer. Should I also validate in the business and data layers as well, to protect against SQL injection and other issues? What validations should go in each layer? Another example would be passing an ID to return a record. Should the data layer ensure that the ID is valid, or should that happen in the BLL / UI?
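
    One common split for the "passing an ID to return a record" example: the UI validates what the user typed, the business layer re-checks its own contract with a guard clause, and the data layer uses a parameterized query, so SQL injection is prevented by parameter binding rather than by string checks. The question is about .NET web forms; this is only a compile-only Java/JDBC sketch with made-up names (RecordService, RecordDao), and it assumes a JDBC Connection is supplied from outside.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        class RecordService {
            private final RecordDao dao;
            RecordService(RecordDao dao) { this.dao = dao; }

            String getRecordName(int id) throws SQLException {
                if (id <= 0) {                                   // business-layer guard clause
                    throw new IllegalArgumentException("id must be positive");
                }
                return dao.findName(id);
            }
        }

        class RecordDao {
            private final Connection connection;
            RecordDao(Connection connection) { this.connection = connection; }

            String findName(int id) throws SQLException {
                // Parameterized query: the driver binds the value, it is never concatenated into the SQL
                try (PreparedStatement ps =
                         connection.prepareStatement("SELECT name FROM records WHERE id = ?")) {
                    ps.setInt(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getString("name") : null;
                    }
                }
            }
        }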

    Read the article

  • Save previous context

    - by Ann
    I use CALayers and I've overridden this delegate method: -(void)drawLayer:(CALayer*)layer inContext:(CGContextRef)context { //draw some rectangle depends on some custom parameters } Then I want to add another rectangle to this layer. And when I call [layer setNeedsDisplay] I expect to see 2 different rectangles in the layer, but I see only the last added rectangle. So please advise me how I can do what I want. Thank you in advance.

    Read the article

  • How to cast a C++ smart pointer up and down

    - by user217428
    Two clients communicate with each other on top of a message layer. In the message body, I need to include a field pointing to any data type. From client A, I send the field as a shared_ptr to the message layer, and I define this field as a shared_ptr in the message layer. But how can I convert this field back to a shared_ptr in client B? Or should I define the shared_ptr in the message layer as something else? Thanks

    Read the article

  • Special parameters for texture binding?

    - by user146780
    Do I have to set up my gl context in a certain way to bind textures. I'm following a tutorial. I start by doing: #define checkImageWidth 64 #define checkImageHeight 64 static GLubyte checkImage[checkImageHeight][checkImageWidth][4]; static GLuint texName; void makeCheckImage(void) { int i, j, c; for (i = 0; i < checkImageHeight; i++) { for (j = 0; j < checkImageWidth; j++) { c = ((((i&0x8)==0)^((j&0x8))==0))*255; checkImage[i][j][0] = (GLubyte) c; checkImage[i][j][1] = (GLubyte) c; checkImage[i][j][2] = (GLubyte) c; checkImage[i][j][3] = (GLubyte) 255; } } } void initt(void) { glClearColor (0.0, 0.0, 0.0, 0.0); makeCheckImage(); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glGenTextures(1, &texName); glBindTexture(GL_TEXTURE_2D, texName); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage); engineGL.current.tex = texName; } In my rendering I do: PolygonTesselator.Begin_Contour(); glEnable(GL_TEXTURE_2D); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glBindTexture(GL_TEXTURE_2D, current.tex); if(layer[currentlayer].Shapes[i].Contour[c].DrawingPoints.size() > 0) { glColor4f( layer[currentlayer].Shapes[i].Color.r, layer[currentlayer].Shapes[i].Color.g, layer[currentlayer].Shapes[i].Color.b, layer[currentlayer].Shapes[i].Color.a); } for(unsigned int j = 0; j < layer[currentlayer].Shapes[i].Contour[c].DrawingPoints.size(); ++j) { gluTessVertex(PolygonTesselator.tobj,&layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0], &layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0]); } PolygonTesselator.End_Contour(); } glDisable(GL_TEXTURE_2D); } However it still renders the color and not the texture at all. I'd atleast expect to see black or something but its as if the bind fails. Am I missing something? Thanks

    Read the article

  • JSF data transfer between UI and business layers

    - by Ram
    Hi, we are using JSF in the UI, Spring in the business layer, and Hibernate in the persistence layer. Now my question is how to pass data from my JSF 1.1_01 UI to the Spring business layer. Can I use my business objects directly in my backing bean, or should I transfer data between the layers through DTOs? Can someone explain this clearly, if possible with a piece of code and links to related websites?
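
    A minimal sketch of the DTO option being asked about: the backing bean copies only what the page needs from the entity into a small DTO instead of exposing the entity itself. All names here (Customer, CustomerDto, CustomerService) are hypothetical.

        // Hibernate-mapped entity on the business/persistence side
        class Customer {
            private final Long id;
            private final String name;
            private final String internalNote;        // something the view should never see
            Customer(Long id, String name, String internalNote) {
                this.id = id; this.name = name; this.internalNote = internalNote;
            }
            Long getId() { return id; }
            String getName() { return name; }
        }

        // What the JSF page actually binds to
        class CustomerDto {
            private final Long id;
            private final String name;
            CustomerDto(Long id, String name) { this.id = id; this.name = name; }
            public Long getId() { return id; }
            public String getName() { return name; }
        }

        interface CustomerService {                    // Spring business layer
            Customer findCustomer(Long id);
        }

        // JSF backing bean: copies entity state into the DTO instead of exposing the entity
        class CustomerBean {
            private final CustomerService service;
            CustomerBean(CustomerService service) { this.service = service; }

            CustomerDto load(Long id) {
                Customer entity = service.findCustomer(id);
                return new CustomerDto(entity.getId(), entity.getName());
            }
        }

    Binding the entity directly is less code, but it couples the view to the persistence model and can trigger lazy loading outside a transaction; the DTO keeps the layers independent at the cost of the mapping code.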

    Read the article

  • Cisco FC SAN switch decision

    - by Chopper3
    I've got to buy a bunch of FC SAN switches in the next week or so, and I have to, and want to, buy Cisco MDSs. Servers are HP BL490c G6's in C7000 chassis with Virtual Connect Flex-10 Ethernet interconnects and VC FC interconnects (Emulex HBAs, btw), all running ESX 3.5U4 (for now). I think I've only really got two choices: MDS 9509's with dual supervisors and a single 48-port 4Gb FC card, or MDS 9222i's with a single supervisor and the built-in 18-FC-port/4-GigE-FCIP-port option. Both have the same functionality (I think; I'm buying the enterprise licence, btw), and both have plenty of performance and adequate ports for now and the next three years. The 9222i's are about 55% of the price of the 9509's - logic says get the 'i's, but will I really miss the dual supervisors? I've got lots of 9509's with dual supervisors that I'm very happy with, but I'm not sure I've ever benefited from the dual sups in the past, and they are nearly twice the price - but if I don't buy them and do miss them, I can't retrofit them later. What are your thoughts?

    Read the article

  • Misconfigured external monitor on Mac OS X Snow Leopard 10.6.3

    - by Mike
    I have an external monitor (specifically, an HDTV) hooked up to my 2.53GHz 13" MacBook Pro. This display works fine, and I use it with my Mac in clamshell mode (e.g. with an external keyboard/mouse, the laptop closed, and the built-in Mac screen turned off). My Mac has multiple users on it. For User A I can use the Mac with the external monitor in both clamshell and dual-monitor setups. For User B, I can use the monitor in a dual-monitor setup, but whenever I switch to clamshell mode the Mac switches to an incorrect output resolution or frequency setting that my HDTV doesn't recognize, resulting in a blank screen and a message about an Unsupported Resolution. Chances are I did this to myself by misconfiguring my display settings at some point in the past, but I have no idea how to undo it. I (obviously) can't see the display to change the settings when it's borked. I can see the display settings if I switch to dual-monitor mode, but those settings only affect the dual-monitor setup; no matter how I change the settings in dual-monitor mode, the clamshell-mode setup remains borked. How can I dig myself out of this hole?

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using Xen Hypervisor) for typical hosting workloads (mail, web, a couple VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes. Since we're an all-Dell shop I've done some research and found the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts on a single storage and I'm wondering if there is any clear advantage for one or the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual controller option). Both setups should let us implement live VM migration since all hosts can access all the LUNs at the same time, and also some shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage). One difference I can see is that the DAS solution is actually limited to 4 hosts while the iSCSI one should be able to grow to more hosts (adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage which I think is a good thing (get rid of any local storage). On the other hand, I would have thought the SAS one to be cheaper but it seems like an MD3200 actually costs a little less than an MD3200i (?) (please note: I've used Dell gear in my examples since that's what we're looking for but I assume the same goes with other vendors) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new, reasonably powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s then onto a dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput with large (2GB) files. We've performed server to 'other local server' tests and see around 500Mbps through the local switches (Cat 6509s); we're going to do the same on the client side, but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?

    Read the article

  • DIMMs: Single vs. Double vs. Quad Rank

    - by MikeyB
    What difference does the 'Rank' of DIMMs make to server memory? For example, when looking at server configurations I see the following being offered for the same server: 2GB (1x2GB) Single Rank PC3-10600 CL9 ECC DDR3-1333 VLP RDIMM 2GB (1x2GB) Dual Rank PC3-10600 CL9 ECC DDR3-1333 VLP RDIMM Given the option of Single Rank vs. Dual Rank or Dual Rank vs. Quad Rank is one always: Faster? Cheaper? Higher Bandwidth?

    Read the article

  • CPU Cores: The more the better?

    - by T Pops
    I currently have a dual-core processor at work and a quad-core at home. I've noticed both PCs are pretty equal as far as launching applications and surfing the web go. The difference I can see is that my dual-core is 2.8GHz and my quad-core is 2.4GHz. Is it better to have a dual-core with a fast clock speed or a quad-core with a mediocre clock speed?

    Read the article

  • Have programmers at your work not taken up or been averse to an offer of a second monitor?

    - by Chris Knight
    I'm putting together a business case for the developers in my company to get a second monitor. After my own experiences and research, this seems a no-brainer to me in terms of increasing productivity and morale/happiness. One question which has niggled me is whether I should be pushing to get all developers onto a second monitor or let folk opt in (i.e. they get one if they want one). Thoughts on this are welcome, but my specific question relates to a snippet on this site: "But when the IT manager at Thibeault's company asked other employees if they wanted dual monitors last year, few jumped at the offer." Blinded by my own prejudgement, I was surprised by this. Has anyone else experienced it? I fully appreciate that some people prefer a single larger monitor, but my general experience of researching the web suggests that most programmers prefer a dual (or more) setup. I guess this should be tempered with the thought that the developers who contribute to such discussions might not be your average developer, who might not care one way or the other. Anyway, if you have experienced the above, have you tried to sell the concept of dual monitors to the masses? If everyone just got 2 monitors regardless of whether they wanted them or not, were there adverse reactions or negative effects? UPDATE: The developers are on a mixture of 17", 22", or 24" single monitors. The desks should be able to accommodate the dual 22" monitors I am proposing, though this will take some getting used to, I imagine.

    Read the article

  • Two things I learned this week...

    - by noreply(at)blogger.com (Thomas Kyte)
    I often say "I learn something new about Oracle every day". It really is true - there is so much to know about it, it is hard to keep up sometimes. Here are the two new things I learned. The first is regarding temporary tablespaces. In the past, when people have asked "how can I shrink my temporary tablespace?" I've said "create a new one that is smaller, alter your database/users to use this new one by default, wait a bit, drop the old one". Actually, I usually said first "don't, it'll just grow again", but some people really wanted to make it smaller. Now there is an easier way: http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_3002.htm#SQLRF53578 - using alter tablespace temp shrink space. The second thing is just a little sqlplus quirk that I probably knew at one point but totally forgot. People run into problems with &'s in sqlplus all of the time, as sqlplus tries to substitute in for an &variable. So, if they try to select '&hello world' from dual, they'll get:

        ops$tkyte%ORA11GR2> select '&hello world' from dual;
        Enter value for hello:
        old   1: select '&hello world' from dual
        new   1: select ' world' from dual

        'WORLD
        ------
         world

        ops$tkyte%ORA11GR2>

    One solution is to "set define off" to disable the substitution (or set define to some other character). Another oft-quoted solution is to use chr(38) - select chr(38)||'hello world' from dual. I never liked that one personally. Today I was shown another way (https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4549764300346084350#4573022300346189787):

        ops$tkyte%ORA11GR2> select '&' || 'hello world' from dual;

        '&'||'HELLOW
        ------------
        &hello world

        ops$tkyte%ORA11GR2>

    Just concatenate '&' to the string - sqlplus doesn't touch that one! I like that better than chr(38) (but a little less than set define off....)

    Read the article

  • How can I use the dualforward parameter in my Unity shader to use lightmaps and normal maps together?

    - by Raphaeltm
    I'm using the free version of unity and I would like to combine lightmaps with specularity and normal maps. After doing a -bunch- of research, I've figured out that there doesn't seem to be any easy way to do this in the free version of unity, which doesn't support deferred rendering/easy use of dual lightmaps. However, it looks like it's possible, by writing a custom shader, using the "dualforward" parameter in a shader, switching the lightmapping mode to "dual lightmaps" and turning on "Use in forward ren." (basically, writing a shader that specifies the use of dual lightmaps, which should allow for a combination of lightmaps and normal maps) So I downloaded the source code for the default shaders (because all I need is a normal specular bumped shader) and added "dualforward" to the parameters: Shader "Bumped Specular Dual Lightmaps" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1) _Shininess ("Shininess", Range (0.03, 1)) = 0.078125 _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {} _BumpMap ("Normalmap", 2D) = "bump" {} } SubShader { Tags { "RenderType"="Opaque" } LOD 400 CGPROGRAM #pragma surface surf BlinnPhong dualforward sampler2D _MainTex; sampler2D _BumpMap; fixed4 _Color; half _Shininess; struct Input { float2 uv_MainTex; float2 uv_BumpMap; }; void surf (Input IN, inout SurfaceOutput o) { fixed4 tex = tex2D(_MainTex, IN.uv_MainTex); o.Albedo = tex.rgb * _Color.rgb; o.Gloss = tex.a; o.Alpha = tex.a * _Color.a; o.Specular = _Shininess; o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap)); } ENDCG } FallBack "Specular" } This, however, doesn't seem to work. When I keep the "dualforward" param, every object that uses it seems to be lit by the one directional light in the scene. When I remove the "dualforward" param, it they look like normal lightmapped objects with no normal maps or specularity. I noticed that the support for "dualforward" seems to be new in v.3.4.2, so I made sure to download it (I was running 3.4.1), but it still doesn't work. Anybody have any advice for me?

    Read the article

  • OpenLayers - LayerRedraw() / Feature rotation / LineString coords

    - by Ozaki
    TL;DR: I have an OpenLayers map with a layer called 'track'. I want to remove track and add track back in, or figure out how to plot a triangle based off one set of coords & a heading (see below). I have an image 'imageFeature' on a layer that rotates on load to the direction being set. I want it to update this rotation that is set in 'styleMap' on a layer called 'tracking'. I set the var 'styleMap' to apply the external image & rotation. The 'imageFeature' is added to the layer at the coords specified. 'imageFeature' is removed. 'imageFeature' is added again in its new location. Rotation is not applied. As the 'styleMap' applies to the layer, I think that I have to remove the layer and add it again rather than just the 'imageFeature'. Layer: var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", { format: OpenLayers.Format.GeoJSON, styleMap: styleMap }); styleMap: var styleMap = new OpenLayers.StyleMap({ fillOpacity: 1, pointRadius: 10, rotation: heading, }); Now, wrapped in a timed function, the imageFeature: map.layers[3].addFeatures(new OpenLayers.Feature.Vector( new OpenLayers.Geometry.Point(longitude, latitude), {rotation: heading, type: parseInt(Math.random() * 3)} )); Type refers to a lookup of 1 of 3 images: styleMap.addUniqueValueRules("default", "type", lookup); var lookup = { 0: {externalGraphic: "Image1.png", rotation: heading}, 1: {externalGraphic: "Image2.png", rotation: heading}, 2: {externalGraphic: "Image3.png", rotation: heading} } I have tried the 'redraw()' function, but it returns "tracking is undefined" or "map.layers[2] is undefined": tracking.redraw(true); map.layers[2].redraw(true); Heading is a variable from a JSON feed: var heading = 13.542; But so far I can't get anything to work; it will only rotate the image on load. The image will move in coordinates as it should, though. So what am I doing wrong with the redraw function, or how can I get this image to rotate live? Thanks in advance -Ozaki Add: I managed to get map.layers[2].redraw(true); to successfully redraw layer 2. But it still does not update the rotation. I am thinking it is because of the styleMap. It runs through the styleMap every n sec, but there are no updates to the rotation, and the variable for heading is updating correctly if I put a watch on it in Firebug. If I were to draw a triangle with an array of points & a LineString, how would I go about facing the triangle towards the heading? I have the lon/lat of one point and the heading. var points = new Array( new OpenLayers.Geometry.Point(lon1, lat1), new OpenLayers.Geometry.Point(lon2, lat2), new OpenLayers.Geometry.Point(lon3, lat3) ); var line = new OpenLayers.Geometry.LineString(points); Looking for any way to solve this problem, image or line - anyone know how to do either? Added a 100-rep bounty; I am really stuck with this.
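
    For the last part of the question - placing three points so the triangle faces the heading - the geometry is: one tip point ahead of the feature along the heading and two base points behind it, where a heading in degrees clockwise from north gives the offset direction (sin h, cos h). The sketch below only shows that math, in Java rather than the question's OpenLayers/JavaScript; it treats lon/lat as a flat plane (fine for small markers, but it ignores the map projection), and the names and the size value are arbitrary.

        public class HeadingTriangle {
            static double[][] triangle(double lon, double lat, double headingDeg, double size) {
                double h = Math.toRadians(headingDeg);
                double fx = Math.sin(h), fy = Math.cos(h);   // unit vector along the heading (clockwise from north)
                double px = Math.cos(h), py = -Math.sin(h);  // perpendicular, to spread the two base corners
                return new double[][] {
                    { lon + fx * size,                   lat + fy * size },                   // tip
                    { lon - fx * size + px * size * 0.6, lat - fy * size + py * size * 0.6 }, // base corner 1
                    { lon - fx * size - px * size * 0.6, lat - fy * size - py * size * 0.6 }  // base corner 2
                };
            }

            public static void main(String[] args) {
                // e.g. a feature at lon 5.0, lat 52.0 with the question's heading of 13.542 degrees
                for (double[] p : triangle(5.0, 52.0, 13.542, 0.01)) {
                    System.out.printf("lon=%.5f lat=%.5f%n", p[0], p[1]);
                }
            }
        }

    The three resulting coordinate pairs would then become OpenLayers.Geometry.Point objects fed into the LineString, as in the snippet above, repeating the first point at the end if a closed triangle outline is wanted.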

    Read the article
