Search Results

Search found 9557 results on 383 pages for 'x86 64'.


  • Data model for unlockable card collection

    - by Karlos Zafra
    My game has a collection of cards (about 64) that are unlocked (one by one) provided that 4 items are collected/gained in the game. Right now the deck of cards is modeled with a plist file, like this:

        <plist>
        <dict>
            <key>card1</key>
            <dict>
                <key>available</key>
                <true/>
                <key>series</key>
                <integer>1</integer>
                <key>model</key>
                <integer>1</integer>
                <key>color</key>
                <integer>1</integer>
            </dict>
            <key>card2</key>
            <dict>
                <key>available</key>
                <false/>
                <key>series</key>
                <integer>1</integer>
                <key>model</key>
                <integer>1</integer>
                <key>color</key>
                <integer>2</integer>
            </dict>

    The value for the key named "available" marks whether the item is locked or not. Right now I have to include the data for the items collected by the user, but including that inside the plist looks like too much hassle. What kind of structure would be more suitable for this? How do you manage a collection of unlockable items like this?
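    One common way to structure this is to split the static card definitions from the player's mutable progress, so the shipped data file never has to be rewritten. A minimal sketch, expressed in C# for concreteness (all class and member names here are illustrative, not from the original post):

        // Immutable card data stays in the bundled file (plist, JSON, ...).
        using System.Collections.Generic;

        public sealed class CardDefinition
        {
            public int Id;      // 1..64, matching "card1".."card64"
            public int Series;
            public int Model;
            public int Color;
        }

        // Mutable progress is saved separately (user defaults, save file, ...).
        public sealed class PlayerProgress
        {
            public readonly HashSet<int> UnlockedCardIds = new HashSet<int>();
            public int PendingItemCount;    // items collected toward the next unlock

            // Call whenever the player gains an item; 4 items unlock one card.
            public void CollectItem(int nextLockedCardId)
            {
                PendingItemCount++;
                if (PendingItemCount >= 4)
                {
                    PendingItemCount = 0;
                    UnlockedCardIds.Add(nextLockedCardId);
                }
            }
        }

    With this split, "available" no longer needs to be stored per card in the plist; a card is available exactly when its id is in UnlockedCardIds.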

    Read the article

  • Installing Ubuntu on HP Envy 4 - 1104tx (preinstalled Windows 8)

    - by Froyo
    I have been using Linux for more than 2 years now. I had an HP Pavilion dv4 laptop and Ubuntu 12.04 was working great. I recently purchased an HP Envy 4 - 1104tx, which has Windows 8 preinstalled. I tried to install Ubuntu 12.04, but since it is not very compatible with UEFI, I downloaded the 64-bit ISO of Ubuntu 12.10 and made a live USB using UNetbootin 583. I followed "Installing Ubuntu on a Pre-Installed UEFI Supported Windows 8 system" but still I am not able to boot from the live USB. I disabled Secure Boot. There is no option for fast boot or anything as such. It still wouldn't work. I also tried booting through Legacy mode, but I'm unable to install via the live USB. Is there any other way? I don't have an SSD, so no problem of fake RAID. Is there some way by which I can install Ubuntu (12.04 preferred)? I don't care about Windows 8. Is there any way I can install Ubuntu over Windows 8? (I don't have a CD/DVD ROM.)

    Read the article

  • Upgraded from ubuntu 12.04 to 12.10 issues

    - by ubuntu novice user
    I recently upgraded from Ubuntu 12.04 to 12.10 and then all hell broke loose. To be more specific: I have a Compaq machine whose hard disk is partitioned into 3 parts. When I installed Ubuntu 10.04, I installed it from within Windows, and since then I have upgraded with each new Ubuntu release via the Update Manager without any problems. I have installed the 64-bit versions. 12.10 downloaded via the Update Manager, and the initial downloading of packages went without problems; however, as it tried to install the packages, error messages appeared. The first was about missing lib files, but I clicked to continue, since I am a relative novice with Ubuntu. It continued, and during the restart process, while powering down, the computer hung on the shutting-down step without rebooting for more than half an hour, so I manually shut down the machine and restarted it. Then a new error message appeared stating it could not find the disk, and I hit the manual fix option. It now boots to an empty Ubuntu desktop with my wallpaper but no launcher, and the graphics appear as if set to a 640 x 480 resolution, so the screen no longer fits onto my 19" LCD. I had to use Ctrl-Alt-T to log out and then restart from there. How can I resolve this issue? Please help!

    Read the article

  • Trouble installing Ubuntu.

    - by CV13
    I have a blank 1TB hard drive that I have run Ubuntu on before. I recently formatted it and am trying to reinstall Ubuntu 12.04 LTS on there. Using the Universal USB Installer and the 64-bit ISO file, I booted up my computer with only my 1TB hard drive connected. I go through the installation process normally, until I restart my computer at the end. Once restarted, I start to experience the problem dealt with here. When I run it normally, it goes to a black screen with a blinking cursor. When I select the "Recovery Mode" option, a bunch of lines scroll across the screen, the last of which is "hostap_pci: Registered netdevice wifi0". It then stops there with a blinking cursor. When I follow the instructions on the page I linked to (replacing "quiet splash" with "nomodeset"), a bunch of lines scroll by after I press Ctrl+x. The last line displayed is

        Adding 8386556k swap on /dev/sda5. Priority:-1 extents:1 across: 8386556k

    It then stops there with a blinking cursor. How do I fix this problem?

    Read the article

  • What language and tools can I use to create a simple game with child-lock (capture all key press) for Windows? [closed]

    - by scw
    I'm writing an open source program that changes colors & plays sounds when keys are pressed. I want it to run in full screen mode and have a child-lock so kids can't exit accidentally. I want it to capture all keys, including Ctrl+Alt+Del. (So it's partially a game, but partially a Windows utility.) My target OS is Windows 7 (32 & 64 bit), keeping Windows 8 in mind. My options:

    - Visual Studio using .NET C# Windows Forms - the devil I know. But not a "game" platform, which is why I'm asking this question.
    - Visual Studio & XNA - I have never used XNA, and am not sure of its capabilities or future support.
    - Python - What flavor, what modules, what IDE? I've never done anything with Python, but I found a couple of similar open source projects in Python.
    - Something else that I don't know about?

    Any input is appreciated.
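    For the key-capture requirement, a low-level keyboard hook is the usual Win32 mechanism, and it works the same from WinForms, XNA, or any other .NET host. A hedged C# sketch follows (illustrative only; note that Ctrl+Alt+Del is a Secure Attention Sequence handled by winlogon, so no user-mode hook can capture it - intercepting that combination is credential-provider or keyboard-filter-driver territory):

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        static class ChildLockHook
        {
            const int WH_KEYBOARD_LL = 13;
            const int WM_KEYDOWN = 0x0100;      // WM_SYSKEYDOWN (0x0104) covers Alt combos

            delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);
            static readonly LowLevelKeyboardProc proc = HookCallback; // keep the delegate alive
            static IntPtr hookId = IntPtr.Zero;

            public static event Action<int> KeyPressed; // virtual-key code for the game to react to

            public static void Install()
            {
                using (Process cur = Process.GetCurrentProcess())
                using (ProcessModule mod = cur.MainModule)
                    hookId = SetWindowsHookEx(WH_KEYBOARD_LL, proc, GetModuleHandle(mod.ModuleName), 0);
            }

            public static void Uninstall()
            {
                UnhookWindowsHookEx(hookId);
            }

            static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
            {
                if (nCode >= 0)
                {
                    if ((int)wParam == WM_KEYDOWN && KeyPressed != null)
                        KeyPressed(Marshal.ReadInt32(lParam)); // first field of KBDLLHOOKSTRUCT is vkCode
                    return (IntPtr)1; // swallow the keystroke: Alt+Tab, Win key, etc. never reach the shell
                }
                return CallNextHookEx(hookId, nCode, wParam, lParam);
            }

            [DllImport("user32.dll", SetLastError = true)]
            static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);
            [DllImport("user32.dll", SetLastError = true)]
            static extern bool UnhookWindowsHookEx(IntPtr hhk);
            [DllImport("user32.dll")]
            static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);
            [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
            static extern IntPtr GetModuleHandle(string lpModuleName);
        }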

    Read the article

  • WebLogic Server (JRockit) - tuning the JRockit JVM [original Japanese title garbled in extraction] | WebLogic Channel

    - by ???02
    [The original Japanese body text was corrupted to "?" characters in extraction; the surviving technical fragments are summarized here.] The article walks through tuning the Oracle JRockit JVM in six steps: 1) understand the application's memory behavior, using Oracle JRockit Mission Control and verbose logging via -Xverbose:memdbg,gcpause,gcreport; 2) choose a garbage-collection strategy and mode (-Xgc:throughput, -Xgc:singlepar, -Xgc:genpar, -Xgc:gencon); 3) size the heap - defaults range from 64 MB up to 3 GB on 64-bit and 1 GB on 32-bit platforms - with -Xms and -Xmx, e.g. java -Xms:2g -Xmx:2g myApplication; 4) size the nursery for generational collectors with -Xns, e.g. java -Xgc:gencon -Xms:2g -Xmx:2g -Xns:512m myApplication; 5) tune garbage-collection pause behavior; and 6) size the thread-local areas (TLA) used for object allocation. The text references sections 4.2 through 4.4.1 of the Oracle JRockit performance tuning guide.

    Read the article

  • Java EE 6 in enterprise systems [original Japanese title garbled in extraction]

    - by ???02
    [The original Japanese body text was corrupted to "?" characters in extraction; only scattered fragments survive.] The article is a discussion of adopting Java EE 6 application servers in enterprise systems, tracing the history from J2EE 1.3/1.4-era development in the early 2000s through Java EE 5 and 6. Recoverable points include: WebLogic Server 12c and GlassFish 3.x as Java EE 6-compliant servers; the much larger heaps available to 64-bit JVMs; JRockit Flight Recorder in WebLogic Server 12c for production diagnostics; and the "Write Once, Run Anywhere" portability argument for standardizing on the Java EE specification rather than proprietary application-server extensions.

    Read the article

  • LinkedIn API returns 'Unauthorized' response (PHP OAuth)

    - by Jim Greenleaf
    I've been struggling with this one for a few days now. I've got a test app set up to connect to LinkedIn via OAuth. I want to be able to update a user's status, but at the moment I'm unable to interact with LinkedIn's API at all. I am able to successfully get a requestToken, then an accessToken, but when I issue a request to the API, I see an 'unauthorized' error that looks something like this: object(OAuthException)#2 (8) { ["message:protected"]=> string(73) "Invalid auth/bad request (got a 401, expected HTTP/1.1 20X or a redirect)" ["string:private"]=> string(0) "" ["code:protected"]=> int(401) ["file:protected"]=> string(47) "/home/pmfeorg/public_html/dev/test/linkedin.php" ["line:protected"]=> int(48) ["trace:private"]=> array(1) { [0]=> array(6) { ["file"]=> string(47) "/home/pmfeorg/public_html/dev/test/linkedin.php" ["line"]=> int(48) ["function"]=> string(5) "fetch" ["class"]=> string(5) "OAuth" ["type"]=> string(2) "->" ["args"]=> array(2) { [0]=> string(35) "http://api.linkedin.com/v1/people/~" [1]=> string(3) "GET" } } } ["lastResponse"]=> string(358) " 401 1276375790558 0000 [unauthorized]. OAU:Bhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy|48032b2d-bc8c-4744-bb84-4eab53578c11|*01|*01:1276375790:xmc3lWhXJvLSUZh4dxMtrf55VVQ= " ["debugInfo"]=> array(5) { ["sbs"]=> string(329) "GET&http%3A%2F%2Fapi.linkedin.com%2Fv1%2Fpeople%2F~&oauth_consumer_key%3DBhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy%26oauth_nonce%3D7068001084c13f2ee6a2117.22312548%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1276375790%26oauth_token%3D48032b2d-bc8c-4744-bb84-4eab53578c11%26oauth_version%3D1.0" ["headers_sent"]=> string(401) "GET /v1/people/~?GET&oauth_consumer_key=Bhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy&oauth_signature_method=HMAC-SHA1&oauth_nonce=7068001084c13f2ee6a2117.22312548&oauth_timestamp=1276375790&oauth_version=1.0&oauth_token=48032b2d-bc8c-4744-bb84-4eab53578c11&oauth_signature=xmc3lWhXJvLSUZh4dxMtrf55VVQ%3D HTTP/1.1 User-Agent: PECL-OAuth/1.0-dev Host: api.linkedin.com Accept: */*" ["headers_recv"]=> string(148) "HTTP/1.1 401 Unauthorized Server: Apache-Coyote/1.1 Date: Sat, 12 Jun 2010 20:49:50 GMT Content-Type: text/xml;charset=UTF-8 Content-Length: 358" ["body_recv"]=> string(358) " 401 1276375790558 0000 [unauthorized]. OAU:Bhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy|48032b2d-bc8c-4744-bb84-4eab53578c11|*01|*01:1276375790:xmc3lWhXJvLSUZh4dxMtrf55VVQ= " ["info"]=> string(216) "About to connect() to api.linkedin.com port 80 (#0) Trying 64.74.98.83... 
    connected Connected to api.linkedin.com (64.74.98.83) port 80 (#0) Connection #0 to host api.linkedin.com left intact Closing connection #0 " } }
    My code looks like this (based on the FireEagle example from php.net):

        $req_url = 'https://api.linkedin.com/uas/oauth/requestToken';
        $authurl = 'https://www.linkedin.com/uas/oauth/authenticate';
        $acc_url = 'https://api.linkedin.com/uas/oauth/accessToken';
        $api_url = 'http://api.linkedin.com/v1/people/~';
        $callback = 'http://www.pmfe.org/dev/test/linkedin.php';
        $conskey = 'Bhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy';
        $conssec = '####################SECRET KEY#####################';

        session_start();

        try {
            $oauth = new OAuth($conskey, $conssec, OAUTH_SIG_METHOD_HMACSHA1, OAUTH_AUTH_TYPE_URI);
            $oauth->enableDebug();
            if (!isset($_GET['oauth_token'])) {
                $request_token_info = $oauth->getRequestToken($req_url);
                $_SESSION['secret'] = $request_token_info['oauth_token_secret'];
                header('Location: ' . $authurl . '?oauth_token=' . $request_token_info['oauth_token']);
                exit;
            } else {
                $oauth->setToken($_GET['oauth_token'], $_SESSION['secret']);
                $access_token_info = $oauth->getAccessToken($acc_url);
                $_SESSION['token'] = $access_token_info['oauth_token'];
                $_SESSION['secret'] = $access_token_info['oauth_token_secret'];
            }
            $oauth->setToken($_SESSION['token'], $_SESSION['secret']);
            $oauth->fetch($api_url, OAUTH_HTTP_METHOD_GET);
            $response = $oauth->getLastResponse();
        } catch (OAuthException $E) {
            var_dump($E);
        }

    I've successfully set up a connection to Twitter and one to Facebook using OAuth, but LinkedIn keeps eluding me. If anyone could offer some advice or point me in the right direction, I will be extremely appreciative!

    Read the article

  • WCF MustUnderstand headers are not understood

    - by raghur
    Hello everyone, I am using a Java web service developed by one of our vendors, which I really do not have any control over. I have written a WCF router which the client application calls; the router sends the message to the Java web service and returns the data back to the client. The issue I am encountering is that I am successfully able to call the Java web service from the WCF router, but I am getting the following exceptions back. The router config file is as follows:

        <customBinding>
            <binding name="SimpleWSPortBinding">
                <!--<reliableSession maxPendingChannels="4" maxRetryCount="8" ordered="true" />-->
                <!--<mtomMessageEncoding messageVersion="Soap12WSAddressing10"></mtomMessageEncoding>-->
                <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16"
                    messageVersion="Soap12WSAddressing10" writeEncoding="utf-8" />
                <httpTransport manualAddressing="false" maxBufferPoolSize="524288"
                    maxReceivedMessageSize="65536" allowCookies="false"
                    authenticationScheme="Anonymous" bypassProxyOnLocal="true"
                    keepAliveEnabled="true" maxBufferSize="65536" transferMode="Buffered"
                    unsafeConnectionNtlmAuthentication="false"/>
            </binding>
        </customBinding>

    Test client config file:

        <customBinding>
            <binding name="DocumentRepository_Binding_Soap12">
                <!--<reliableSession maxPendingChannels="4" maxRetryCount="8" ordered="true" />-->
                <!--<mtomMessageEncoding messageVersion="Soap12WSAddressing10"></mtomMessageEncoding>-->
                <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16"
                    messageVersion="Soap12WSAddressing10" writeEncoding="utf-8">
                    <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                </textMessageEncoding>
                <httpTransport manualAddressing="false" maxBufferPoolSize="524288"
                    maxReceivedMessageSize="65536" allowCookies="false"
                    authenticationScheme="Anonymous" bypassProxyOnLocal="false"
                    hostNameComparisonMode="StrongWildcard" keepAliveEnabled="true"
                    maxBufferSize="65536" proxyAuthenticationScheme="Anonymous" realm=""
                    transferMode="Buffered" unsafeConnectionNtlmAuthentication="false"
                    useDefaultWebProxy="true" />
            </binding>
        </customBinding>

    If I use textMessageEncoding I get:

        <soap:Text xml:lang="en">MustUnderstand headers: [{http://www.w3.org/2005/08/addressing}To,
        {http://www.w3.org/2005/08/addressing}Action] are not understood.</soap:Text>

    If I use mtomMessageEncoding I get: "The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error."
    My router class is as follows:

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
            ConcurrencyMode = ConcurrencyMode.Multiple,
            AddressFilterMode = AddressFilterMode.Any,
            ValidateMustUnderstand = false)]
        public class EmployeeService : IEmployeeService
        {
            public System.ServiceModel.Channels.Message ProcessMessage(System.ServiceModel.Channels.Message requestMessage)
            {
                ChannelFactory<IEmployeeService> factory = new ChannelFactory<IEmployeeService>("client");
                factory.Endpoint.Behaviors.Add(new MustUnderstandBehavior(false));
                IEmployeeService proxy = factory.CreateChannel();
                Message responseMessage = proxy.ProcessMessage(requestMessage);
                return responseMessage;
            }
        }

    The "client" in the above code under ChannelFactory is defined in the config file as:

        <client>
            <endpoint address="http://JavaWS/EmployeeService" binding="wsHttpBinding"
                bindingConfiguration="wsHttp" contract="EmployeeService.IEmployeeService"
                name="client" behaviorConfiguration="clientBehavior">
                <headers>
                </headers>
            </endpoint>
        </client>

    Really appreciate your kind help. Thanks in advance, Raghu
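    One observation, offered as a hedged sketch rather than a confirmed fix: the not-understood headers ({...}To and {...}Action) are WS-Addressing headers, which the Soap12WSAddressing10 message version stamps onto every message. If the Java service does not implement WS-Addressing, a SOAP 1.2 message version without addressing may be worth trying (config equivalent: messageVersion="Soap12" on textMessageEncoding). Built programmatically, the binding would look like this:

        using System.ServiceModel.Channels;

        static Binding BuildSoap12NoAddressingBinding()
        {
            // SOAP 1.2 envelope with no WS-Addressing headers at all
            var encoding = new TextMessageEncodingBindingElement
            {
                MessageVersion = MessageVersion.Soap12
            };
            var transport = new HttpTransportBindingElement
            {
                MaxReceivedMessageSize = 65536
            };
            return new CustomBinding(encoding, transport);
        }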

    Read the article

  • Using MS Standalone profiler in VS2008 Professional

    - by fishdump
    I am trying to profile my .NET dll while running it from VS unit testing tools but I am having problems. I am using the standalone command-line profiler as VS2008 Professional does not come with an inbuilt profiler. I have an open CMD window and have run the following commands (I instrumented it earlier which is why vsinstr gave the warning that it did): C:\...\BusinessRules\obj\Debug>vsperfclrenv /samplegclife /tracegclife /globalsamplegclife /globaltracegclife Enabling VSPerf Sampling Attach Profiling. Allows to 'attaching' to managed applications. Current Profiling Environment variables are: COR_ENABLE_PROFILING=1 COR_PROFILER={0a56a683-003a-41a1-a0ac-0f94c4913c48} COR_LINE_PROFILING=1 COR_GC_PROFILING=2 C:\...\BusinessRules\obj\Debug>vsinstr BusinessRules.dll Microsoft (R) VSInstr Post-Link Instrumentation 9.0.30729 x86 Copyright (C) Microsoft Corp. All rights reserved. Error VSP1018 : VSInstr does not support processing binaries that are already instrumented. C:\...\BusinessRules\obj\Debug>vsperfcmd /start:trace /output:foo.vsp Microsoft (R) VSPerf Command Version 9.0.30729 x86 Copyright (C) Microsoft Corp. All rights reserved. C:\...\BusinessRules\obj\Debug> I then ran the unit tests that exercised the instrumented code. When the unit tests were complete, I did... C:\...\BusinessRules\obj\Debug>vsperfcmd /shutdown Microsoft (R) VSPerf Command Version 9.0.30729 x86 Copyright (C) Microsoft Corp. All rights reserved. Waiting for process 4836 ( C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\vstesthost.exe) to shutdown... It was clearly waiting for VS2008 to close so I closed it... Shutting down the Profile Monitor ------------------------------------------------------------ C:\...\BusinessRules\obj\Debug> All looking good, there was a 3.2mb foo.vsp file in the directory. I next did... C:\...\BusinessRules\obj\Debug>vsperfreport foo.vsp /summary:all Microsoft (R) VSPerf Report Generator, Version 9.0.0.0 Copyright (C) Microsoft Corporation. All rights reserved. VSP2340: Environment variables were not properly set during profiling run and managed symbols may not resolve. Please use vsperfclrenv before profiling. File opened Successfully opened the file. A report file, foo_Header.csv, has been generated. A report file, foo_MarksSummary.csv, has been generated. A report file, foo_ProcessSummary.csv, has been generated. A report file, foo_ThreadSummary.csv, has been generated. Analysis completed A report file, foo_FunctionSummary.csv, has been generated. A report file, foo_CallerCalleeSummary.csv, has been generated. A report file, foo_CallTreeSummary.csv, has been generated. A report file, foo_ModuleSummary.csv, has been generated. C:\...\BusinessRules\obj\Debug> Notice the warning about environment variables and using vsperfclrenv? But I had run it! Maybe I used the wrong switches? I don't know. Anyway, loading the csv files into Excel or using the perfconsole tool gives loads of useful info with useless symbol names: *** Loading commands from: C:\temp\PerfConsole\bin\commands\timebytype.dll *** Adding command: timebytype *** Loading commands from: C:\temp\PerfConsole\bin\commands\partition.dll *** Adding command: partition Welcome to PerfConsole 1.0 (for bugs please email: [email protected]), for help type: ?, for a quickstart type: ?? 
    > load foo.vsp
    *** Couldn't match to either expected sampled or instrumented profile schema, defaulting to sampled
    *** Couldn't match to either expected sampled or instrumented profile schema, defaulting to sampled
    *** Profile loaded from 'foo.vsp' into @foo
    > functions @foo

        Exclusive               Inclusive               Function Name   Module Name
        --------------------    --------------------    --------------  ---------------
        900,798,600,000.00 %    900,798,600,000.00 %    0x0600003F      20397910
        14,968,500,000.00 %     44,691,540,000.00 %     0x06000040      14736385
        8,101,253,000.00 %      14,836,330,000.00 %     0x06000041      5491345
        3,216,315,000.00 %      6,876,929,000.00 %      0x06000042      3924533
        <snip>
        71,449,430.00 %         71,449,430.00 %         0x0A000074      42572
        52,914,200.00 %         52,914,200.00 %         0x0A000073      0
        14,791.00 %             13,006,010.00 %         0x0A00007B      0
        199,177.00 %            6,082,932.00 %          0x2B000001      5350072
        2,420,116.00 %          2,420,116.00 %          0x0A00008A      0
        836.00 %                451,888.00 %            0x0A000045      0
        9,616.00 %              399,436.00 %            0x0A000039      0
        18,202.00 %             298,223.00 %            0x06000046      1479900

    I am so close to being able to find the bottlenecks, if only it would give me the function and module names instead of hex numbers! What am I doing wrong? --- Alistair.

    Read the article

  • How do I use C# and ADO.NET to query an Oracle table with a spatial column of type SDO_GEOMETRY?

    - by John Donahue
    My development machine is running Windows 7 Enterprise, 64-bit version. I am using Visual Studio 2010 Release Candidate. I am connecting to an Oracle 11g Enterprise server version 11.1.0.7.0. I had a difficult time locating Oracle client software that is made for 64-bit Windows systems and eventually landed here to download what I assume is the proper client connectivity software. I added a reference to "Oracle.DataAccess" which is version 2.111.6.0 (Runtime Version is v2.0.50727). I am targeting .NET CLR version 4.0 since all properties of my VS Solution are defaults and this is 2010 RC. I was then able to write a console application in C# that established connectivity, executed a SELECT statement, and properly returned data when the table in question does NOT contain a spatial column. My problem is that this no longer works when the table I query has a column of type SDO_GEOMETRY in it. Below is the simple console application I am trying to run that reproduces the problem. When the code gets to the line with "ExecuteReader", an exception is raised with the message "Unsupported column datatype".

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        namespace ConsoleTestOracle
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string oradb = string.Format("Data Source={0};User Id={1};Password={2};",
                        "hostname/servicename", "login", "password");
                    try
                    {
                        using (OracleConnection conn = new OracleConnection(oradb))
                        {
                            conn.Open();
                            OracleCommand cmd = new OracleCommand();
                            cmd.Connection = conn;
                            cmd.CommandText = "select * from SDO_8307_2D_POINTS";
                            cmd.CommandType = CommandType.Text;
                            OracleDataReader dr = cmd.ExecuteReader();
                        }
                    }
                    catch (Exception e)
                    {
                        string error = e.Message;
                    }
                }
            }
        }

    The fact that this code works when used against a table that does not contain a spatial column of type SDO_GEOMETRY makes me think I have my Windows 7 machine properly configured, so I am surprised that I get this exception when the table contains different kinds of columns. I don't know if there is some configuration on my machine or the Oracle machine that needs to be done, or if the Oracle client software I have installed is wrong, or old and needs to be updated. Here is the SQL I used to create the table, populate it with some rows containing points in the spatial column, etc. if you want to try to reproduce this exactly.

        create table SDO_8307_2D_Points (
            ObjectID number(38) not null unique,
            TestID number,
            shape SDO_GEOMETRY);

        Insert into SDO_8307_2D_Points values (1, 1, SDO_GEOMETRY(2001, 8307, null,
            SDO_ELEM_INFO_ARRAY(1, 1, 1), SDO_ORDINATE_ARRAY(10.0, 10.0)));
        Insert into SDO_8307_2D_Points values (2, 2, SDO_GEOMETRY(2001, 8307, null,
            SDO_ELEM_INFO_ARRAY(1, 1, 1), SDO_ORDINATE_ARRAY(10.0, 20.0)));

        insert into user_sdo_geom_metadata values ('SDO_8307_2D_Points', 'SHAPE',
            SDO_DIM_ARRAY(SDO_DIM_ELEMENT('Lat', -180, 180, 0.05),
                          SDO_DIM_ELEMENT('Long', -90, 90, 0.05)), 8307);

        create index SDO_8307_2D_Point_indx on SDO_8307_2D_Points(shape)
            indextype is mdsys.spatial_index PARAMETERS ('sdo_indx_dims=2');

    Any advice or insights would be greatly appreciated. Thank you.
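    A hedged workaround sketch, not a confirmed fix: ODP.NET cannot materialize an SDO_GEOMETRY column unless a UDT mapping is registered, but it reads ordinary scalar columns fine, so converting the geometry to well-known text on the server side keeps the result set to supported types. SDO_UTIL.TO_WKTGEOMETRY is a standard Oracle Spatial function; the column alias below is illustrative.

        cmd.CommandText =
            "select ObjectID, TestID, SDO_UTIL.TO_WKTGEOMETRY(shape) as shape_wkt " +
            "from SDO_8307_2D_POINTS";
        using (OracleDataReader dr = cmd.ExecuteReader())
        {
            while (dr.Read())
            {
                decimal id = dr.GetDecimal(0);   // ObjectID, number(38)
                string wkt = dr.GetString(2);    // e.g. "POINT (10.0 10.0)"
                Console.WriteLine("{0}: {1}", id, wkt);
            }
        }

    Reading the geometry back as a real object rather than text would mean registering an SDO_GEOMETRY UDT mapping with ODP.NET, which a plain client install does not set up by default.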

    Read the article

  • SubSonic Stored Procedure Issue - Data Generated at Stored Procedure is different from Data Received

    - by ShaShaIn
    Hi All, I am facing an unknown problem while using a stored procedure with SubSonic. I have written a stored procedure and application code that takes first name and last name as input parameters and returns the last login id as an output parameter. It creates the login id as the first character of the first name plus the complete last name when no such login id exists yet; otherwise it appends an incrementing number to the last login id, e.g. First Name - Mark, Last Name - Waugh, First Login Id - MWaugh, Second Login Id - MWaugh1, Third Login Id - MWaugh2, etc.

    Stored procedure:

        SET ANSI_NULLS OFF
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE PROCEDURE [dbo].[Users_FetchLoginId]
        (
            @FirstName nvarchar(64),
            @LastName nvarchar(64),
            @LoginId nvarchar(256) OUTPUT
        )
        AS
        DECLARE @UserId nvarchar(256);
        SET @UserId = NULL;
        SET @LoginId = NULL;

        SELECT @UserId = LoweredUserName FROM aspnet_Users
        WHERE LoweredUserName LIKE (LOWER(SUBSTRING(@FirstName,1,1) + @LastName))

        IF @@rowcount = 0 OR @UserId IS NULL
        BEGIN
            SET @LoginId = (SUBSTRING(@FirstName, 1, 1) + @LastName);
            print @LoginId
            RETURN 1;
        END
        ELSE
        BEGIN
            SELECT TOP 1 LoweredUserName FROM aspnet_Users
            WHERE LoweredUserName LIKE (LOWER(SUBSTRING(@FirstName,1,1) + @LastName + '%'))
            ORDER BY LoweredUserName DESC
            RETURN 2;
        END

    Application code:

        public string FetchLoginId(string firstName, string lastName)
        {
            SubSonic.StoredProcedure sp = SPs.UsersFetchLoginId(firstName, lastName, null);
            sp.Command.AddReturnParameter();
            sp.Execute();

            if (sp.Command.Parameters.Find(delegate(QueryParameter queryParameter)
                { return queryParameter.Mode == ParameterDirection.ReturnValue; }).ParameterValue != System.DBNull.Value)
            {
                int returnCode = Convert.ToInt32(sp.Command.Parameters.Find(delegate(QueryParameter queryParameter)
                    { return queryParameter.Mode == ParameterDirection.ReturnValue; }).ParameterValue,
                    CultureInfo.InvariantCulture);

                if (returnCode == 1)
                {
                    // UserName as first character of first name & full last name
                    return sp.Command.Parameters[2].ParameterValue.ToString();
                }
                if (returnCode == 2)
                {
                    DataSet ds = sp.GetDataSet();
                    if (null == ds || null == ds.Tables[0] || 0 == ds.Tables[0].Rows.Count)
                        return "";

                    string maxLoginId = ds.Tables[0].Rows[0]["LoweredUserName"].ToString();
                    string initialLoginId = firstName.Substring(0, 1) + lastName;
                    int maxLoginIdIndex = 0;
                    int initialLoginIdLength = initialLoginId.Length;

                    if (maxLoginId.Substring(initialLoginIdLength).Length == 0)
                    {
                        maxLoginIdIndex++;
                        // UserName as max lowered user name found & incrementer as suffix (here, first incrementer, i.e. 1)
                        return (initialLoginId + maxLoginIdIndex);
                    }
                    if (int.TryParse(maxLoginId.Substring(initialLoginIdLength), out maxLoginIdIndex))
                    {
                        if (maxLoginIdIndex > 0)
                        {
                            maxLoginIdIndex++;
                            // UserName as max lowered user name found & incrementer as suffix
                            return (initialLoginId + maxLoginIdIndex);
                        }
                    }
                }
            }

    Now the problem: for some input (see the test data below), the login id is created correctly at the SQL Server end, but on the application/SubSonic DAL side some characters are truncated.

        First Name - Jenelia
        Last Name  - Kanupatikenalaalayampentyalavelugoplansubhramanayam

        [dbo].[Users_FetchLoginId] - executing the stored procedure separately - login id is correct:
        JKanupatikenalaalayampentyalavelugoplansubhramanayam

        FetchLoginId(...) - application code DAL side - login id is wrongly received from the stored procedure:
        JKanupatikenalaalayampentyalavelugoplansubhramanay

    You can easily see that 2 characters are removed. If the data is correctly generated by the stored procedure, why are the characters removed when the data is received in the output parameter of the stored procedure? Is it due to some internal known or unknown bug in SubSonic? Your help is significant. Thanks in advance...
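    A hedged guess at the cause: the received value is exactly 50 characters, which suggests the output parameter's declared Size defaulted to 50 somewhere in the stack, so ADO.NET truncates the value client-side no matter what the procedure assigned. At the raw ADO.NET level the fix looks like the sketch below; in SubSonic the equivalent would be setting the size on the @LoginId QueryParameter before Execute() (the exact property depends on your SubSonic version).

        using System.Data;
        using System.Data.SqlClient;

        static string FetchLoginIdRaw(string connectionString, string firstName, string lastName)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.Users_FetchLoginId", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@FirstName", SqlDbType.NVarChar, 64).Value = firstName;
                cmd.Parameters.Add("@LastName", SqlDbType.NVarChar, 64).Value = lastName;

                // Size must match the SP declaration (nvarchar(256)), or long values are cut off.
                SqlParameter loginId = cmd.Parameters.Add("@LoginId", SqlDbType.NVarChar, 256);
                loginId.Direction = ParameterDirection.Output;

                conn.Open();
                cmd.ExecuteNonQuery();
                return loginId.Value as string;
            }
        }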

    Read the article

  • Reading XML elements & child nodes using LINQ in VB.NET - help with the where condition :(

    - by New Linq Baby
    Hi, I am new to the LINQ world. I need urgent help in reading XML elements using LINQ with a specific where condition. I need to find the max air_temp for a county, i.e. where county name = "Boone" and hour id = "06/03/2009 09:00CDT". I tried something like the code below, but no luck:

        Dim custs As IEnumerable = From c In Element.Load("C:\meridian.xml").Elements("county") _
                                   Select c.Elements("hour").Elements("air_temp").Max()
        For Each x In custs
            Response.Write(custs(0).ToString())
        Next

    Here is the XML file:

        <forecasts>
            <issued>06/02/2009 12:00CDT</issued>
            <county name="Adair">
                <hour id="06/02/2009 12:00CDT">
                    <air_temp>61</air_temp>
                    <cloud_cover>overcast</cloud_cover>
                    <dew_point>59</dew_point>
                    <precip_prob>90</precip_prob>
                    <precip_rate>0.12</precip_rate>
                    <precip_type>rain</precip_type>
                    <snow_rate>0.0</snow_rate>
                    <wind_direction>NE</wind_direction>
                    <wind_speed>12</wind_speed>
                    <dew_point_confidence>-3/+3</dew_point_confidence>
                    <road_temp>64</road_temp>
                    <road_frost_prob>0</road_frost_prob>
                    <road_potential_evap_rate>429</road_potential_evap_rate>
                    <road_temp_confidence>-3/+2</road_temp_confidence>
                    <dew_point_confidence>-3/+3</dew_point_confidence>
                    <bridge_temp>63</bridge_temp>
                    <bridge_temp_confidence>-4/+2</bridge_temp_confidence>
                    <bridge_frost>NO</bridge_frost>
                </hour>
                <hour id="06/02/2009 13:00CDT">
                    <air_temp>61</air_temp>
                    <cloud_cover>overcast</cloud_cover>
                    <dew_point>60</dew_point>
                    <precip_prob>70</precip_prob>
                    <precip_rate>0.01</precip_rate>
                    <precip_type>rain</precip_type>
                    <snow_rate>0.0</snow_rate>
                    <wind_direction>ENE</wind_direction>
                    <wind_speed>10</wind_speed>
                    <dew_point_confidence>-3/+3</dew_point_confidence>
                    <road_temp>65</road_temp>
                    <road_frost_prob>0</road_frost_prob>
                    <road_potential_evap_rate>411</road_potential_evap_rate>
                    <road_temp_confidence>-3/+2</road_temp_confidence>
                    <dew_point_confidence>-3/+3</dew_point_confidence>
                    <bridge_temp>64</bridge_temp>
                    <bridge_temp_confidence>-4/+1</bridge_temp_confidence>
                    <bridge_frost>NO</bridge_frost>
                </hour>
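    A sketch of the query shape using LINQ to XML - written here in C#, but the VB.NET version is line-for-line analogous since both use the same System.Xml.Linq API (this assumes the county element carries a name attribute and the hour element an id attribute, as in the sample above):

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class Forecast
        {
            static void Main()
            {
                XDocument doc = XDocument.Load(@"C:\meridian.xml");

                // Max air_temp for county "Boone" at hour "06/03/2009 09:00CDT".
                // Note: Max() throws if nothing matches; guard with Any() if needed.
                int maxAirTemp =
                    (from county in doc.Descendants("county")
                     where (string)county.Attribute("name") == "Boone"
                     from hour in county.Elements("hour")
                     where (string)hour.Attribute("id") == "06/03/2009 09:00CDT"
                     select (int)hour.Element("air_temp"))
                    .Max();

                Console.WriteLine(maxAirTemp);
            }
        }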

    Read the article

  • Constant Memory Leak in SpeechSynthesizer

    - by DudeFX
    I have developed a project which I would like to release, which uses C#, WPF and the System.Speech.Synthesizer object. The issue preventing the release of this project is that whenever SpeakAsync is called it leaves a memory leak that grows to the point of eventual failure. I believe I have cleaned up properly after using this object, but cannot find a cure. I have run the program through ANTS Memory Profiler and it reports that WAVEHDR and WaveHeader are growing with each call. I have created a sample project to try to pinpoint the cause, but am still at a loss. Any help would be appreciated. The project uses VS2008 and is a C# WPF project that targets .NET 3.5 and Any CPU. You need to manually add a reference to System.Speech. Here is the code:

        <Window x:Class="SpeechTest.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Window1" Height="300" Width="300">
            <Grid>
                <StackPanel Orientation="Vertical">
                    <Button Content="Start Speaking" Click="Start_Click" Margin="10" />
                    <Button Content="Stop Speaking" Click="Stop_Click" Margin="10" />
                    <Button Content="Exit" Click="Exit_Click" Margin="10"/>
                </StackPanel>
            </Grid>
        </Window>

        // Start of code behind
        using System;
        using System.Windows;
        using System.Speech.Synthesis;

        namespace SpeechTest
        {
            public partial class Window1 : Window
            {
                // speak settings
                private bool speakingOn = false;
                private int curLine = 0;
                private string[] speakLines = {
                    "I am wondering",
                    "Why whenever Speech is called",
                    "A memory leak occurs",
                    "If you run this long enough",
                    "It will eventually crash",
                    "Any help would be appreciated" };

                public Window1()
                {
                    InitializeComponent();
                }

                private void Start_Click(object sender, RoutedEventArgs e)
                {
                    speakingOn = true;
                    SpeakLine();
                }

                private void Stop_Click(object sender, RoutedEventArgs e)
                {
                    speakingOn = false;
                }

                private void Exit_Click(object sender, RoutedEventArgs e)
                {
                    App.Current.Shutdown();
                }

                private void SpeakLine()
                {
                    if (speakingOn)
                    {
                        // Create our speak object
                        SpeechSynthesizer spk = new SpeechSynthesizer();
                        spk.SpeakCompleted += new EventHandler<SpeakCompletedEventArgs>(spk_Completed);
                        // Speak the line
                        spk.SpeakAsync(speakLines[curLine]);
                    }
                }

                public void spk_Completed(object sender, SpeakCompletedEventArgs e)
                {
                    if (sender is SpeechSynthesizer)
                    {
                        // get access to our Speech object
                        SpeechSynthesizer spk = (SpeechSynthesizer)sender;
                        // Clean up after speaking (thinking the event handler is causing the memory leak)
                        spk.SpeakCompleted -= new EventHandler<SpeakCompletedEventArgs>(spk_Completed);
                        // Dispose the speech object
                        spk.Dispose();
                        // bump it
                        curLine++;
                        // check validity
                        if (curLine >= speakLines.Length)
                        {
                            // back to the beginning
                            curLine = 0;
                        }
                        // Speak line
                        SpeakLine();
                    }
                }
            }
        }

    I run this program on Windows 7 64-bit, and it will run and eventually halt when attempting to create a new SpeechSynthesizer object. When run on Windows Vista 64-bit, the memory grows from a starting point of 34k to, so far, about 400k and growing. Can anyone see anything in the code that might be causing this, or is this an issue with the Speech object itself? Any help would be appreciated.
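    One hedged workaround to try: instead of creating and disposing a SpeechSynthesizer per utterance, keep a single long-lived instance and queue each line on it, which avoids churning the underlying audio objects (the WAVEHDR allocations the profiler flags). A minimal sketch against the same sample project - whether it fully cures the leak in a given environment is something to verify with the profiler:

        private readonly SpeechSynthesizer spk = new SpeechSynthesizer();

        public Window1()
        {
            InitializeComponent();
            spk.SpeakCompleted += spk_Completed;   // one subscription for the window's lifetime
        }

        private void SpeakLine()
        {
            if (speakingOn)
                spk.SpeakAsync(speakLines[curLine]);
        }

        public void spk_Completed(object sender, SpeakCompletedEventArgs e)
        {
            curLine = (curLine + 1) % speakLines.Length;
            SpeakLine();   // queue the next line on the same synthesizer
        }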

    Read the article

  • NHibernate MySQL Composite-Key

    - by LnDCobra
    I am trying to create a composite key that mimics the set of primary keys in the built-in MySQL db table. The db primary key is as follows:

        Field | Type     | Null |
        ------+----------+------+
        Host  | char(60) | No   |
        Db    | char(64) | No   |
        User  | char(16) | No   |

    This is my DataBasePrivilege.hbm.xml file:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
            assembly="TGS.MySQL.DataBaseObjects" namespace="TGS.MySQL.DataBaseObjects">
          <class name="TGS.MySQL.DataBaseObjects.DataBasePrivilege,TGS.MySQL.DataBaseObjects" table="db">
            <composite-id name="CompositeKey"
                class="TGS.MySQL.DataBaseObjects.DataBasePrivilegePrimaryKey, TGS.MySQL.DataBaseObjects">
              <key-property name="Host" column="Host" type="char" length="60" />
              <key-property name="DataBase" column="Db" type="char" length="64" />
              <key-property name="User" column="User" type="char" length="16" />
            </composite-id>
          </class>
        </hibernate-mapping>

    The following are my 2 classes for my composite key:

        namespace TGS.MySQL.DataBaseObjects
        {
            public class DataBasePrivilege
            {
                public virtual DataBasePrivilegePrimaryKey CompositeKey { get; set; }
            }

            public class DataBasePrivilegePrimaryKey
            {
                public string Host { get; set; }
                public string DataBase { get; set; }
                public string User { get; set; }

                public override bool Equals(object obj)
                {
                    if (ReferenceEquals(null, obj)) return false;
                    if (ReferenceEquals(this, obj)) return true;
                    if (obj.GetType() != typeof(DataBasePrivilegePrimaryKey)) return false;
                    return Equals((DataBasePrivilegePrimaryKey)obj);
                }

                public bool Equals(DataBasePrivilegePrimaryKey other)
                {
                    if (ReferenceEquals(null, other)) return false;
                    if (ReferenceEquals(this, other)) return true;
                    return Equals(other.Host, Host) && Equals(other.DataBase, DataBase) && Equals(other.User, User);
                }

                public override int GetHashCode()
                {
                    unchecked
                    {
                        int result = (Host != null ? Host.GetHashCode() : 0);
                        result = (result * 397) ^ (DataBase != null ? DataBase.GetHashCode() : 0);
                        result = (result * 397) ^ (User != null ? User.GetHashCode() : 0);
                        return result;
                    }
                }
            }
        }

    And the following is the exception I am getting:

        Execute System.InvalidCastException: Unable to cast object of type 'System.Object[]' to type 'TGS.MySQL.DataBaseObjects.DataBasePrivilegePrimaryKey'.
        at (Object , GetterCallback )
        at NHibernate.Bytecode.Lightweight.AccessOptimizer.GetPropertyValues(Object target)
        at NHibernate.Tuple.Component.PocoComponentTuplizer.GetPropertyValues(Object component)
        at NHibernate.Type.ComponentType.GetPropertyValues(Object component, EntityMode entityMode)
        at NHibernate.Type.ComponentType.GetHashCode(Object x, EntityMode entityMode)
        at NHibernate.Type.ComponentType.GetHashCode(Object x, EntityMode entityMode, ISessionFactoryImplementor factory)
        at NHibernate.Engine.EntityKey.GenerateHashCode()
        at NHibernate.Engine.EntityKey..ctor(Object identifier, String rootEntityName, String entityName, IType identifierType, Boolean batchLoadable, ISessionFactoryImplementor factory, EntityMode entityMode)
        at NHibernate.Engine.EntityKey..ctor(Object id, IEntityPersister persister, EntityMode entityMode)
        at NHibernate.Event.Default.DefaultLoadEventListener.OnLoad(LoadEvent event, LoadType loadType)
        at NHibernate.Impl.SessionImpl.FireLoad(LoadEvent event, LoadType loadType)
        at NHibernate.Impl.SessionImpl.Get(String entityName, Object id)
        at NHibernate.Impl.SessionImpl.Get(Type entityClass, Object id)
        at NHibernate.Impl.SessionImpl.Get[T](Object id)
        at TGS.MySQL.DataBase.DataProvider.GetDatabasePrivilegeByHostDbUser(String host, String db, String user) in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.MySQL.DataBase\DataProvider.cs:line 20
        at TGS.UserAccountControl.UserAccountManager.GetDatabasePrivilegeByHostDbUser(String host, String db, String user) in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.UserAccountControl\UserAccountManager.cs:line 10
        at TGS.UserAccountControlTest.UserAccountManagerTest.CanGetDataBasePrivilegeByHostDbUser() in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.UserAccountControlTest\UserAccountManagerTest.cs:line 12

    I am new to NHibernate and any help would be appreciated. I just can't see where it is getting the object[] from. Is the composite key supposed to be object[]?
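    A hedged diagnostic suggestion rather than a confirmed fix: the cast failure happens inside NHibernate's bytecode accessor optimizer (AccessOptimizer.GetPropertyValues) while it reads the composite-id properties. One quick way to test whether that optimizer is the culprit is to disable it before the configuration is built:

        using NHibernate.Cfg;

        // Must be set before the Configuration is created.
        // (Equivalent config setting: use_reflection_optimizer = false
        // in the hibernate-configuration section.)
        NHibernate.Cfg.Environment.UseReflectionOptimizer = false;

        Configuration cfg = new Configuration();
        cfg.Configure();
        var sessionFactory = cfg.BuildSessionFactory();

    If Get<T> works with the optimizer off, the problem lies in the generated accessors for the composite key class rather than in the mapping itself.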

    Read the article

  • FreeType2 Bitmap to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has an unsigned char* buffer that contains the data to write. I have got it somewhat working saving it to disk as a *.tga, but when saving as *.bmp it renders incorrectly. I believe that the size of byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!!

    C++/CLI code:

        FT_Bitmap *bitmap = &face->glyph->bitmap;

        int width = (face->bitmap->metrics.width / 64);
        int height = (face->bitmap->metrics.height / 64);

        // must be aligned on a 32 bit boundary or 4 bytes
        int depth = 8;
        int stride = ((width * depth + 31) & ~31) >> 3;
        int bytes = (int)(stride * height);

        // as *.tga
        void *buffer = bytes ? malloc(bytes) : NULL;
        if (buffer)
        {
            memset(buffer, 0, bytes);
            for (int i = 0; i < glyph->rows; ++i)
                memcpy((char *)buffer + (i * width), glyph->buffer + (i * glyph->pitch), glyph->pitch);
            WriteTGA("Test.tga", buffer, width, height);
        }

        array<Byte>^ values = gcnew array<Byte>(bytes);
        Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes);

        // as *.bmp
        Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb);

        // create bitmap data, lock pixels to be written.
        BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height),
            ImageLockMode::WriteOnly, bitmap->PixelFormat);
        Marshal::Copy(values, 0, bitmapData->Scan0, bytes);
        systemBitmap->UnlockBits(bitmapData);
        systemBitmap->Save("Test.bmp");

    Reference, FT_Bitmap:

        typedef struct FT_Bitmap_
        {
            int             rows;
            int             width;
            int             pitch;
            unsigned char*  buffer;
            short           num_grays;
            char            pixel_mode;
            char            palette_mode;
            void*           palette;
        } FT_Bitmap;

    Reference, WriteTGA:

        bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height)
        {
            FILE *fp = NULL;
            fopen_s(&fp, filename, "wb");
            if (fp)
            {
                TGAHeader header;
                memset(&header, 0, sizeof(TGAHeader));
                header.imageType = 3;
                header.width = width;
                header.height = height;
                header.depth = 8;
                header.descriptor = 0x20;

                fwrite(&header, sizeof(header), 1, fp);
                fwrite(pxl, sizeof(uint8) * width * height, 1, fp);
                fclose(fp);
                return true;
            }
            return false;
        }

    Read the article

  • Problem with Freetype and OpenGL

    - by Morgan
    Hey, I'm having a weird issue with drawing text in OpenGL loaded with the FreeType 2 library. Here is a screenshot of what I'm seeing: http://img203.imageshack.us/img203/3316/freetypeweird.png

    Here are my code bits for loading and rendering my text.

        class Font
        {
            Font(const String& filename)
            {
                if (FT_New_Face(Font::ftLibrary, "arial.ttf", 0, &mFace)) {
                    cout << "UH OH!" << endl;
                }
                FT_Set_Char_Size(mFace, 16 * 64, 16 * 64, 72, 72);
            }

            Glyph* GetGlyph(const unsigned char ch)
            {
                if (FT_Load_Char(mFace, ch, FT_LOAD_RENDER))
                    cout << "OUCH" << endl;

                FT_Glyph glyph;
                if (FT_Get_Glyph(mFace->glyph, &glyph))
                    cout << "OUCH" << endl;

                FT_BitmapGlyph bitmap_glyph = (FT_BitmapGlyph)glyph;

                Glyph* thisGlyph = new Glyph;
                thisGlyph->buffer = bitmap_glyph->bitmap.buffer;
                thisGlyph->width = bitmap_glyph->bitmap.width;
                thisGlyph->height = bitmap_glyph->bitmap.rows;
                return thisGlyph;
            }
        };

    The relevant glyph information (width, height, buffer) is stored in the following struct:

        struct Glyph {
            GLubyte* buffer;
            Uint width;
            Uint height;
        };

    And finally, to render it, I have this class called RenderFont:

        class RenderFont
        {
            RenderFont(Font* font)
            {
                mTextureIds = new GLuint[128];
                mFirstDisplayListId = glGenLists(128);
                glGenTextures(128, mTextureIds);

                for (unsigned char i = 0; i < 128; i++)
                {
                    MakeDisplayList(font, i);
                }
            }

            void MakeDisplayList(Font* font, unsigned char ch)
            {
                Glyph* glyph = font->GetGlyph(ch);

                glBindTexture(GL_TEXTURE_2D, mTextureIds[ch]);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, glyph->width, glyph->height,
                             0, GL_ALPHA, GL_UNSIGNED_BYTE, glyph->buffer);

                glNewList(mFirstDisplayListId + ch, GL_COMPILE);
                glBindTexture(GL_TEXTURE_2D, mTextureIds[ch]);
                glBegin(GL_QUADS);
                glTexCoord2d(0, 1); glVertex2f(0, glyph->height);
                glTexCoord2d(0, 0); glVertex2f(0, 0);
                glTexCoord2d(1, 0); glVertex2f(glyph->width, 0);
                glTexCoord2d(1, 1); glVertex2f(glyph->width, glyph->height);
                glEnd();
                glTranslatef(16, 0, 0);
                glEndList();
            }

            void Draw(const String& text, Uint size, const TransformComponent* transform, const Color32* color)
            {
                glEnable(GL_TEXTURE_2D);
                glEnable(GL_BLEND);
                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

                glTranslatef(100, 250, 0.0f);

                glListBase(mFirstDisplayListId);
                glCallLists(text.length(), GL_UNSIGNED_BYTE, text.c_str());

                glDisable(GL_TEXTURE_2D);
                glDisable(GL_BLEND);

                glLoadIdentity();
            }

        private:
            GLuint mFirstDisplayListId;
            GLuint* mTextureIds;
        };

    Can anybody see anything weird going on here that would cause the garbled text? It's strange because if I change the font size, or the DPI, then some of the letters that display correctly become garbled, and other letters that were garbled before then display correctly.

    Read the article

  • unsigned char* buffer (FreeType2 Bitmap) to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has an unsigned char* buffer that contains the data to write. I have got it somewhat working saving it to disk as a *.tga, but when saving as *.bmp it renders incorrectly. I believe that the size of byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!!

    C++/CLI code:

        FT_Bitmap *bitmap = &face->glyph->bitmap;

        int width = (face->bitmap->metrics.width / 64);
        int height = (face->bitmap->metrics.height / 64);

        // must be aligned on a 32 bit boundary or 4 bytes
        int depth = 8;
        int stride = ((width * depth + 31) & ~31) >> 3;
        int bytes = (int)(stride * height);

        // as *.tga
        void *buffer = bytes ? malloc(bytes) : NULL;
        if (buffer)
        {
            memset(buffer, 0, bytes);
            for (int i = 0; i < glyph->rows; ++i)
                memcpy((char *)buffer + (i * width), glyph->buffer + (i * glyph->pitch), glyph->pitch);
            WriteTGA("Test.tga", buffer, width, height);
        }

        // as *.bmp
        array<Byte>^ values = gcnew array<Byte>(bytes);
        Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes);

        Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb);

        // create bitmap data, lock pixels to be written.
        BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height),
            ImageLockMode::WriteOnly, bitmap->PixelFormat);
        Marshal::Copy(values, 0, bitmapData->Scan0, bytes);
        systemBitmap->UnlockBits(bitmapData);
        systemBitmap->Save("Test.bmp");

    Reference, FT_Bitmap:

        typedef struct FT_Bitmap_
        {
            int             rows;
            int             width;
            int             pitch;
            unsigned char*  buffer;
            short           num_grays;
            char            pixel_mode;
            char            palette_mode;
            void*           palette;
        } FT_Bitmap;

    Reference, WriteTGA:

        bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height)
        {
            FILE *fp = NULL;
            fopen_s(&fp, filename, "wb");
            if (fp)
            {
                TGAHeader header;
                memset(&header, 0, sizeof(TGAHeader));
                header.imageType = 3;
                header.width = width;
                header.height = height;
                header.depth = 8;
                header.descriptor = 0x20;

                fwrite(&header, sizeof(header), 1, fp);
                fwrite(pxl, sizeof(uint8) * width * height, 1, fp);
                fclose(fp);
                return true;
            }
            return false;
        }

    Update:

        FT_Bitmap *bitmap = &face->glyph->bitmap;

        // stride must be aligned on a 32 bit boundary or 4 bytes
        int depth = 8;
        int stride = ((width * depth + 31) & ~31) >> 3;
        int bytes = (int)(stride * height);

        target = gcnew Bitmap(width, height, PixelFormat::Format8bppIndexed);

        // create bitmap data, lock pixels to be written.
        BitmapData^ bitmapData = target->LockBits(Rectangle(0, 0, width, height),
            ImageLockMode::WriteOnly, target->PixelFormat);

        array<Byte>^ values = gcnew array<Byte>(bytes);
        Marshal::Copy((IntPtr)bitmap->buffer, values, 0, bytes);
        Marshal::Copy(values, 0, bitmapData->Scan0, bytes);

        target->UnlockBits(bitmapData);

    Read the article

  • The best cross platform (portable) arbitrary precision math library

    - by Siu Ching Pong - Asuka Kenji
    Dear ninjas / hackers / wizards, I'm looking for a good arbitrary-precision math library in C or C++. Could you please give me some advice / suggestions? The primary requirements:

    - It MUST handle arbitrarily big integers (my primary interest is in integers). In case you don't know what "arbitrarily big" means, imagine something like 100000! (the factorial of 100000).
    - The precision MUST NOT NEED to be specified during library initialization / object creation. The precision should ONLY be constrained by the available resources of the system.
    - It SHOULD utilize the full power of the platform, and should handle "small" numbers natively. That means on a 64-bit platform, calculating 2^33 + 2^32 should use the available 64-bit CPU instructions. The library SHOULD NOT calculate this in the same way as it does 2^66 + 2^65 on the same platform.
    - It MUST handle addition (+), subtraction (-), multiplication (*), integer division (/), remainder (%), power (**), increment (++), decrement (--), gcd(), factorial(), and other common integer arithmetic calculations efficiently. Ability to handle functions like sqrt() (square root) and log() (logarithm) that do not produce integer results is a plus. Ability to handle symbolic computations is even better.

    Here is what I have found so far:

    - Java's BigInteger and BigDecimal classes: I have been using these so far. I have read the source code, but I don't understand the math underneath. It may be based on theories / algorithms that I have never learnt.
    - The built-in integer type or core libraries of bc / Python / Ruby / Haskell / Lisp / Erlang / OCaml / PHP / some other languages: I have used some of these, but I have no idea which library they are using, or which kind of implementation.

    What I already know:

    - Using a char as a decimal digit, and a char* as a decimal string, and doing calculations on the digits using a for-loop.
    - Using an int (or a long int, or a long long) as a basic "unit" and an array of them as an arbitrarily long integer, and doing calculations on the elements using a for-loop.
    - Booth's multiplication algorithm.

    What I don't know:

    - Printing the binary array mentioned above in decimal without using naive methods. Example of a naive method: (1) add the bits from the lowest to the highest: 1, 2, 4, 8, 16, 32, ...; (2) use a char* string mentioned above to store the intermediate decimal results.

    What I would appreciate:

    - Good comparisons of GMP, MPFR, decNumber (or other libraries that are good in your opinion).
    - Good suggestions on books / articles that I should read. For example, an illustration with figures of how an un-naive arbitrarily-long binary-to-decimal conversion algorithm works would be good.
    - Any help.

    Please DO NOT answer this question if you think using a double (or a long double, or a long long double) can solve this problem easily - if you do think so, it means that you don't understand the issue under discussion - or if you have no experience with arbitrary-precision mathematics. Thank you in advance! Asuka Kenji

    Read the article

  • Thread Synchronisation 101

    - by taspeotis
    Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access to the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive, and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I could put the CCriticalSection inside a tighter code block), but my code doesn't crash and it runs fast enough.

    At work my multithreaded code needs to be tighter, only locking/synchronising at the lowest level needed. At work I was trying to debug some multithreaded code, and I came across this:

        EnterCriticalSection(&m_Crit4);
        m_bSomeVariable = true;
        LeaveCriticalSection(&m_Crit4);

    Now, m_bSomeVariable is a Win32 BOOL (not volatile), which as far as I know is defined to be an int, and on x86 reading and writing these values is a single instruction; since context switches occur on an instruction boundary, there should be no need to synchronise this operation with a critical section. I did some more research online to see whether this operation needs synchronisation, and I came up with two scenarios where it does:

    1. The CPU implements out-of-order execution, or the second thread is running on a different core and the updated value is not written into RAM for the other core to see; and
    2. The int is not 4-byte aligned.

    I believe number 1 can be solved using the "volatile" keyword. In VS2005 and later the C++ compiler surrounds access to such a variable with memory barriers, ensuring that the variable is always completely written/read to main system memory before it is used. Number 2 I cannot verify; I don't know why the byte alignment would make a difference. I don't know the x86 instruction set, but does mov need to be given a 4-byte aligned address? If not, do you need to use a combination of instructions? That would introduce the problem.

    So...

    QUESTION 1: Does using the "volatile" keyword (implicitly using memory barriers and hinting to the compiler not to optimise this code) absolve a programmer from the need to synchronise a 4-byte/8-byte variable on x86/x64 between read/write operations?

    QUESTION 2: Is there an explicit requirement that the variable be 4-byte/8-byte aligned?

    I did some more digging into our code, and found these variables defined in the class:

        class CExample
        {
        private:
            CRITICAL_SECTION m_Crit1; // Protects variable a
            CRITICAL_SECTION m_Crit2; // Protects variable b
            CRITICAL_SECTION m_Crit3; // Protects variable c
            CRITICAL_SECTION m_Crit4; // Protects variable d
            // ...
        };

    Now, to me this seems excessive. I thought critical sections synchronised threads within a process, so if you've got one you can enter it and no other thread in that process can execute. There is no need for a critical section for each variable you want to protect; if you're in a critical section then nothing else can interrupt you. I think the only thing that can change the variables from outside a critical section is if the process shares a memory page with another process (can you do that?) and the other process starts to change the values. Mutexes would also help here; are named mutexes shared across all processes, or only between processes of the same name?

    QUESTION 3: Is my analysis of critical sections correct, and should this code be rewritten to use mutexes? I have had a look at other synchronisation objects (semaphores and spinlocks); are they better suited here?

    QUESTION 4: Where are critical sections/mutexes/semaphores/spinlocks best suited? That is, which synchronisation problem should each be applied to? Is there a vast performance penalty for choosing one over the other?

    And while we're on it, I read that spinlocks should not be used in a single-core multithreaded environment, only a multi-core multithreaded environment. So, QUESTION 5: Is this wrong, or if not, why is it right?

    Thanks in advance for any responses :)
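    A minimal sketch of the flag pattern under discussion, written against C++11 rather than the VS2005-era toolchain in the question, so it illustrates the concepts rather than being a drop-in fix; the names (g_shutdownFlag, writer, reader) are invented for the example. A std::atomic<bool> is guaranteed to be free of torn reads and writes and suitably aligned, which is what QUESTION 1 and QUESTION 2 are probing, and the explicit memory orderings make the cross-core visibility requirements visible in the source:

        #include <atomic>
        #include <iostream>
        #include <thread>

        // Illustrative example only; these names are not from the original code.
        std::atomic<bool> g_shutdownFlag{false};

        void writer()
        {
            // A release store publishes the flag (and every write made before it)
            // to any thread that later performs an acquire load of the same flag.
            g_shutdownFlag.store(true, std::memory_order_release);
        }

        void reader()
        {
            // Spin until the flag becomes visible; no critical section is needed
            // for a single aligned flag like this.
            while (!g_shutdownFlag.load(std::memory_order_acquire))
                std::this_thread::yield();
            std::cout << "flag observed" << std::endl;
        }

        int main()
        {
            std::thread t1(reader);
            std::thread t2(writer);
            t1.join();
            t2.join();
            return 0;
        }

    On x86/x64 the release store and acquire load typically compile to plain mov instructions; the atomic type exists to guarantee alignment and to stop the compiler and CPU from reordering around the flag, which is the part that a plain (or even volatile) BOOL does not portably promise.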

    Read the article

  • maxItemsInObjectGraph ignored...

    - by Palantir
    Hi! I have a problem with a WCF service which tries to serialize too much data. From the trace I get an error which says that the maximum number of elements that can be serialized or deserialized is '65536', and that I should try incrementing the MaxItemsInObjectGraph quota. So I went and modified this value, but it is simply ignored (the error is the same, with the same number). All this is server-side; I am calling the service via wget for the moment. My web.config is like this:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="AlgoMap.Web.MapService.MapServiceBehavior">
                <dataContractSerializer maxItemsInObjectGraph="131072" />
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <customBinding>
              <binding name="customBinding0" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:02:00">
                <binaryMessageEncoding>
                  <readerQuotas maxDepth="64" maxStringContentLength="16384" maxArrayLength="16384" maxBytesPerRead="16384" maxNameTableCharCount="16384" />
                </binaryMessageEncoding>
                <httpTransport />
              </binding>
            </customBinding>
          </bindings>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <services>
            <service behaviorConfiguration="AlgoMap.Web.MapService.MapServiceBehavior" name="AlgoMap.Web.MapService.MapService">
              <endpoint address="" binding="customBinding" bindingConfiguration="customBinding0" contract="AlgoMap.Web.MapService.MapService" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
        </system.serviceModel>

    Version 2, with maxItemsInObjectGraph moved into an endpoint behavior attached to both endpoints, is not working either:

        <system.serviceModel>
          <behaviors>
            <endpointBehaviors>
              <behavior name="AlgoMap.Web.MapService.MapServiceEndpointBehavior">
                <dataContractSerializer maxItemsInObjectGraph="131072" />
              </behavior>
            </endpointBehaviors>
            <serviceBehaviors>
              <behavior name="AlgoMap.Web.MapService.MapServiceBehavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <customBinding>
              <binding name="customBinding0" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:02:00">
                <binaryMessageEncoding>
                  <readerQuotas maxDepth="64" maxStringContentLength="16384" maxArrayLength="16384" maxBytesPerRead="16384" maxNameTableCharCount="16384" />
                </binaryMessageEncoding>
                <httpTransport />
              </binding>
            </customBinding>
          </bindings>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <services>
            <service behaviorConfiguration="AlgoMap.Web.MapService.MapServiceBehavior" name="AlgoMap.Web.MapService.MapService">
              <endpoint address="" binding="customBinding" bindingConfiguration="customBinding0" contract="AlgoMap.Web.MapService.MapService" behaviorConfiguration="AlgoMap.Web.MapService.MapServiceEndpointBehavior" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" behaviorConfiguration="AlgoMap.Web.MapService.MapServiceEndpointBehavior" />
            </service>
          </services>
        </system.serviceModel>

    Can anyone help?? Thanks!!

    Read the article

  • Launch configuration for an Eclipse RCP project

    - by ekebsi
    I am new to Eclipse and have been asked to make some changes to an RCP project, and now I cannot get it to run. Unfortunately I cannot contact the author to get help, so I would appreciate any hint that could lead to the solution. When I launch the application I get the following log message:

        !ENTRY org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE One or more bundles are not resolved because the following root constraints are not resolved:
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.team.core.nl1_3.3.0.I20070607.jar was not resolved.
        !SUBENTRY 2 org.eclipse.team.core.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.team.core_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.emf.edit.ui.nl1_2.3.0.v200706262000.jar was not resolved.
        !SUBENTRY 2 org.eclipse.emf.edit.ui.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.emf.edit.ui_[2.0.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.team.ui.nl1_3.3.0.I20070607.jar was not resolved.
        !SUBENTRY 2 org.eclipse.team.ui.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.team.ui_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.rcp.nl1_3.2.0.v20070612.jar was not resolved.
        !SUBENTRY 2 org.eclipse.rcp.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.rcp_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.ui.win32.nl1_3.2.100.I20070319-0010.jar was not resolved.
        !SUBENTRY 2 org.eclipse.ui.win32.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.ui.win32_[3.2.0,4.0.0).

        !ENTRY org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE The following is a complete list of bundles which are not resolved, see the prior log entry for the root cause if it exists:
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.emf.edit.ui.nl1_2.3.0.v200706262000.jar [29] was not resolved.
        !SUBENTRY 2 org.eclipse.emf.edit.ui.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.emf.edit.ui_[2.0.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.team.ui.nl1_3.3.0.I20070607.jar [45] was not resolved.
        !SUBENTRY 2 org.eclipse.team.ui.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.team.ui_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.ui.win32.nl1_3.2.100.I20070319-0010.jar [66] was not resolved.
        !SUBENTRY 2 org.eclipse.ui.win32.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.ui.win32_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.team.core.nl1_3.3.0.I20070607.jar [83] was not resolved.
        !SUBENTRY 2 org.eclipse.team.core.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.team.core_[3.2.0,4.0.0).
        !SUBENTRY 1 org.eclipse.osgi 2 0 2012-11-07 09:03:45.878
        !MESSAGE Bundle update@plugins/org.eclipse.rcp.nl1_3.2.0.v20070612.jar [124] was not resolved.
        !SUBENTRY 2 org.eclipse.rcp.nl1 2 0 2012-11-07 09:03:45.878
        !MESSAGE Missing host org.eclipse.rcp_[3.2.0,4.0.0).

    The pattern seems to be that each *.nl1 language fragment cannot resolve because its host bundle (e.g. org.eclipse.team.core in the version range [3.2.0,4.0.0)) is not present in the launch configuration. I am using Windows 7 Enterprise 64-bit (4 GB) and Eclipse Juno 64-bit.

    Read the article

  • JRuby wrong element type class java.lang.String(array contains char) related to JAVA_HOME

    - by Daryl
    I am on 64-bit Ubuntu running:

        java version "1.6.0_18"
        OpenJDK Runtime Environment (IcedTea6 1.8) (6b18-1.8-0ubuntu1)
        OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)

    and:

        jruby 1.4.0 (ruby 1.8.7 patchlevel 174) (2010-02-11 6586) (OpenJDK 64-Bit Server VM 1.6.0_18) [amd64-java]

    I have this code running on my Windows 7 computer at home. I recently copied my whole folder over to Ubuntu and installed Java, JRuby, and the associated gems, but I get this error when I run my main file:

        jruby run.rb test
        =================Processing FREDERICKSBURG_1.1=======================
        ERROR IN TESTING wrong element type class java.lang.String(array contains char)
        /home/daryl/Desktop/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `to_java'
        /home/daryl/Desktop/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `split'
        /home/daryl/Desktop/work/Code/geografikos/lib/models/page.rb:103:in `sentences'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/lingpipe_svm.rb:34:in `extract'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:9:in `process'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `each'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `process'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `each'
        /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `process'
        /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:111:in `generate_all'
        /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:105:in `each'
        /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:105:in `generate_all'
        run.rb:56

    The focus of the error is:

        ERROR IN TESTING wrong element type class java.lang.String(array contains char)

    Everything works fine on my Windows machine. I figured I was getting this error because I did not have JAVA_HOME set, so I added this to my bashrc:

        export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk

    and have confirmed it:

        echo $JAVA_HOME
        /usr/lib/jvm/java-1.6.0-openjdk

    I can produce a similar error by removing my JAVA_HOME variable on Windows:

        =================Processing FREDERICKSBURG_1.3=======================
        ERROR IN TESTING cannot convert instance of class org.jruby.RubyString to char
        C:/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `to_java'
        C:/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `split'
        C:/work/Code/geografikos/lib/models/page.rb:103:in `sentences'
        C:/work/Code/geografikos/lib/extractor/lingpipe_svm.rb:34:in `extract'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:9:in `process'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `each'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `process'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `each'
        C:/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `process'
        C:/work/Code/geografikos/lib/statistics.rb:111:in `generate_all'
        C:/work/Code/geografikos/lib/statistics.rb:105:in `each'
        C:/work/Code/geografikos/lib/statistics.rb:105:in `generate_all'
        run.rb:56

    It is obviously not exactly the same, but I have a feeling this has to do with the Java path. You can probably tell from the error that I am just trying to convert a Ruby variable to Java using to_java. This works fine on my Windows machine, and I have confirmed the gems are the same, but I don't think this has to do with gems.

    I lied: I changed JAVA_HOME back on my Windows machine and the error still occurs there, so now the code doesn't run on either machine. I recently installed Git on my Windows machine and added the code to a repository, but I haven't really done anything with it; all it said was that it would convert all LF to CRLF. That shouldn't change anything, though, should it? Any ideas on why I am now getting these errors? I haven't changed anything on my Windows machine in months except for installing Git.

    Read the article

  • Python dictionary key missing

    - by Greg K
    I thought I'd put together a quick script to consolidate the CSS rules I have distributed across multiple CSS files, which I can then minify. I'm new to Python, but I figured this would be a good exercise for trying a new language. My main loop isn't parsing the CSS as I thought it would: I populate a list with the selectors parsed from the CSS files, so that I can return the CSS rules in order, but later in the script the list contains an element that is not found in the dictionary.

        for line in self.file.readlines():
            if self.hasSelector(line):
                selector = self.getSelector(line)
                if selector not in self.order:
                    self.order.append(selector)
            elif selector and self.hasProperty(line):
                # rules.setdefault(selector,[]).append(self.getProperty(line))
                property = self.getProperty(line)
                properties = [] if selector not in rules else rules[selector]
                if property not in properties:
                    properties.append(property)
                rules[selector] = properties
                # print "%s :: %s" % (selector, "".join(rules[selector]))
        return rules

    Error encountered:

        $ css-combine combined.css test1.css test2.css
        Traceback (most recent call last):
          File "css-combine", line 108, in <module>
            c.run(outfile, stylesheets)
          File "css-combine", line 64, in run
            [(selector, rules[selector]) for selector in parser.order],
        KeyError: 'p'

    Swap the inputs:

        $ css-combine combined.css test2.css test1.css
        Traceback (most recent call last):
          File "css-combine", line 108, in <module>
            c.run(outfile, stylesheets)
          File "css-combine", line 64, in run
            [(selector, rules[selector]) for selector in parser.order],
        KeyError: '#header_.title'

    I've done some quirky things in the code, like substituting underscores for the spaces in dictionary key names in case whitespace was an issue; maybe this is a benign precaution. Depending on the order of the inputs, a different key cannot be found in the dictionary.

    The script:

        #!/usr/bin/env python
        import optparse
        import re

        class CssParser:

            def __init__(self):
                self.file = False
                self.order = []  # store rules assignment order

            def parse(self, rules = {}):
                if self.file == False:
                    raise IOError("No file to parse")
                selector = False
                for line in self.file.readlines():
                    if self.hasSelector(line):
                        selector = self.getSelector(line)
                        if selector not in self.order:
                            self.order.append(selector)
                    elif selector and self.hasProperty(line):
                        # rules.setdefault(selector,[]).append(self.getProperty(line))
                        property = self.getProperty(line)
                        properties = [] if selector not in rules else rules[selector]
                        if property not in properties:
                            properties.append(property)
                        rules[selector] = properties
                        # print "%s :: %s" % (selector, "".join(rules[selector]))
                return rules

            def hasSelector(self, line):
                return True if re.search("^([#a-z,\.:\s]+){", line) else False

            def getSelector(self, line):
                s = re.search("^([#a-z,:\.\s]+){", line).group(1)
                return "_".join(s.strip().split())

            def hasProperty(self, line):
                return True if re.search("^\s?[a-z-]+:[^;]+;", line) else False

            def getProperty(self, line):
                return re.search("([a-z-]+:[^;]+;)", line).group(1)

        class Consolidator:
            """Class to consolidate CSS rule attributes"""

            def run(self, outfile, files):
                parser = CssParser()
                rules = {}
                for file in files:
                    try:
                        parser.file = open(file)
                        rules = parser.parse(rules)
                    except IOError:
                        print "Cannot read file: " + file
                    finally:
                        parser.file.close()
                self.serialize(
                    [(selector, rules[selector]) for selector in parser.order],
                    outfile
                )

            def serialize(self, rules, outfile):
                try:
                    f = open(outfile, "w")
                    for rule in rules:
                        f.write(
                            "%s {\n\t%s\n}\n\n" % (
                                " ".join(rule[0].split("_")),
                                "\n\t".join(rule[1])
                            )
                        )
                except IOError:
                    print "Cannot write output to: " + outfile
                finally:
                    f.close()

        def init():
            op = optparse.OptionParser(
                usage="Usage: %prog [options] <output file> <stylesheet1> " +
                      "<stylesheet2> ... <stylesheetN>",
                description="Combine CSS rules spread across multiple " +
                            "stylesheets into a single file"
            )
            opts, args = op.parse_args()
            if len(args) < 3:
                if len(args) == 1:
                    print "Error: No input files specified.\n"
                elif len(args) == 2:
                    print "Error: One input file specified, nothing to combine.\n"
                op.print_help()
                exit(-1)
            return [opts, args]

        if __name__ == '__main__':
            opts, args = init()
            outfile, stylesheets = [args[0], args[1:]]
            c = Consolidator()
            c.run(outfile, stylesheets)

    Test CSS file 1:

        body { background-color: #e7e7e7; } p { margin: 1em 0em; }

    File 2:

        body { font-size: 16px; } #header .title { font-family: Tahoma, Geneva, sans-serif; font-size: 1.9em; } #header .title a, #header .title a:hover { color: #f5f5f5; border-bottom: none; text-shadow: 2px 2px 3px rgba(0, 0, 0, 1); }

    Thanks in advance.
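    One failure mode consistent with both tracebacks, offered here as a hypothesis rather than a confirmed diagnosis (the post does not show how the test stylesheets are line-wrapped): if a rule sits entirely on one line, hasSelector matches first, so the elif branch never examines the declarations on that same line. The selector then lands in self.order but never gets an entry in rules, and the list comprehension in run raises a KeyError for exactly that selector. A minimal sketch of the pattern, using hypothetical CSS lines:

        import re

        # Hypothetical input, not taken verbatim from the original test files.
        lines = [
            "body {",
            " background-color: #e7e7e7;",  # the elif branch captures this
            "}",
            "p { margin: 1em 0em; }",       # selector branch wins; property is lost
        ]

        order, rules = [], {}
        for line in lines:
            m = re.search(r"^([#a-z,:\.\s]+){", line)
            if m:
                order.append("_".join(m.group(1).strip().split()))
            elif order and re.search(r"^\s?[a-z-]+:[^;]+;", line):
                rules.setdefault(order[-1], []).append(line.strip())

        print(order)  # ['body', 'p']
        print(rules)  # {'body': ['background-color: #e7e7e7;']}
        print([(s, rules[s]) for s in order])  # raises KeyError: 'p'

    If that is the cause, scanning a selector line a second time for any declarations that follow the brace (or using two independent if checks instead of if/elif) would make the missing keys appear; this assumes single-line rules in the inputs, which the post does not confirm.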

    Read the article

  • Compiling ruby1.9.1 hangs and fills swap!

    - by nfm
    I'm compiling Ruby 1.9.1-p376 under Ubuntu 8.04 server LTS (64-bit) by doing the following:

        $ ./configure
        $ make
        $ sudo make install

    ./configure works without complaints. make hangs indefinitely until all my RAM and swap are gone. It gets stuck after the following output:

        compiling ripper
        make[1]: Entering directory `/tmp/ruby1.9.1/ruby-1.9.1-p376/ext/ripper'
        gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O2 -g -Wall -Wno-parentheses -o ripper.o -c ripper.c

    If I run the gcc command by hand with the -v argument to get verbose output, it hangs after the following:

        Using built-in specs.
        Target: x86_64-linux-gnu
        Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
        Thread model: posix
        gcc version 4.2.4 (Ubuntu 4.2.4-1ubuntu4)
        /usr/lib/gcc/x86_64-linux-gnu/4.2.4/cc1 -quiet -v -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H="extconf.h" ripper.c -quiet -dumpbase ripper.c -mtune=generic -auxbase-strip ripper.o -g -O2 -Wall -Wno-parentheses -version -fPIC -fstack-protector -fstack-protector -o /tmp/ccRzHvYH.s
        ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
        ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/4.2.4/../../../../x86_64-linux-gnu/include"
        ignoring nonexistent directory "/usr/include/x86_64-linux-gnu"
        ignoring duplicate directory "../.././ext/ripper"
        ignoring duplicate directory "../../."
        #include "..." search starts here:
        #include <...> search starts here:
         .
         ../../.ext/include/x86_64-linux
         ../.././include
         ../..
         /usr/local/include
         /usr/lib/gcc/x86_64-linux-gnu/4.2.4/include
         /usr/include
        End of search list.
        GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4) (x86_64-linux-gnu)
                compiled by GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4).
        GGC heuristics: --param ggc-min-expand=47 --param ggc-min-heapsize=32795
        Compiler executable checksum: 6e11fa7ca85fc28646173a91f2be2ea3

    I just compiled Ruby on another computer for reference, and it took about 10 seconds to print the following output (after the Compiler executable checksum line above):

        COLLECT_GCC_OPTIONS='-v' '-I.' '-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486'
        as -V -Qy -o ripper.o /tmp/cca4fa7R.s
        GNU assembler version 2.20 (i486-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.20
        COMPILER_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/
        LIBRARY_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../:/lib/:/usr/lib/
        COLLECT_GCC_OPTIONS='-v' '-I.' '-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486'

    I have absolutely no clue what could be going wrong here. Any ideas where I should start?

    Read the article

< Previous Page | 236 237 238 239 240 241 242 243 244 245 246 247  | Next Page >