Search Results

Search found 21838 results on 874 pages for 'long double'.


  • Apple Automator "New PDF from Images" maintaining same filename

    - by mech
    I will potentially have 26k old legacy PICT images to convert to PDF as a first migration step. I am using Apple Automator, with the "Dispense Items Incrementally" action to loop through them. However, I can't seem to get "New PDF from Images" to keep the original filename. Anyone able to offer some advice :) FYI, I am converting to PDF because I can't use ImageMagick to convert directly to my ultimate JPEG format: the PICT files were created very long ago, so ImageMagick fails with a "convert: improper image header" error. See this ticket for more information. So I am doing an intermediate PICT-to-PDF conversion first, then converting that PDF to JPEG :) The only thing left is the "Output File Name" setting, which does not let me keep the original filename. See the screen here:

    Read the article

  • Why is my iPad's wireless so flakey?

    - by Mark
    I'm the proud owner of a new iPad here in the UK. All is good except for the wifi, which is a bit flakey. It connects fine to my Draytek router, which is set for WPA/WPA2 and 56g only, displaying full signal strength. Then, after a few minutes, it goes down to minimum strength... and sometimes it goes back up again. A few times it seems to lose the connection completely and needs to be turned off and on again. I've looked at the Apple support site and have tried their recommendations (which are not really very relevant), but still nothing. I've tried setting the router to WPA2 only and setting long preamble. Right now, I guess I want to know whether it's a hardware problem with my device and it should be returned, or whether it's a problem with all iPads that will be resolved. I guess I could take it back to the Mac genius bar, but I find those guys so incredibly pretentious and, frankly, rather useless, that I'd rather wait until I've exercised other options!

    Read the article

  • Production Access Denied! Who caused this rule anyways?

    - by Matt Watson
    One of the biggest challenges for most developers is getting access to production servers. In smaller dev teams of fewer than about 5 people, everyone usually has access. Then you hire developer #6, he messes something up in production... and now nobody has access. That is how it always starts in small dev teams. I think just about every rule of life gets created this way: one person messes it up for the rest of us, and rules are then put in place to try and prevent it from happening again.

    Breaking the rules is in our nature. In this example it is for a good cause and a necessity: to support our applications and troubleshoot problems as they arise. So how do developers typically break the rules? Some create their own method to collect log files off servers so they can see them. Expensive log management programs can collect log files, but log files alone are not enough. Centralizing where important errors are logged is common. Some lucky developers are given production server access by the IT operations team out of necessity. Wait. That's not fair to all developers and knowingly breaks the company rule! When customers complain or the system is down, the rules go out the window.

    Commonly, lead developers get production access because they are ultimately responsible for supporting the application and may be the only people who know how to fix it. The problem with only giving lead developers production access is that it doesn't scale from a support standpoint. Those key employees become the go-to people for solving application problems, but they also become a bottleneck. They end up spending up to half of their time every day helping resolve application defects, performance problems, or whatever the fire of the day is. This is actually the last thing you want your lead developers doing; they should be working on something more strategic, like major enhancements to the product. Having production access can actually be a curse if you are the guy stuck hunting down log files all day.

    Application defects are good tasks for junior developers. They can usually handle figuring out simple application problems. But nothing is worse than being a junior developer who can't figure out those problems while the backlog of them grows and grows. Some of them require production server access to verify a deployment was done correctly, verify config settings, view log files, or maybe just restart an application. Since the junior developers don't have access, they end up bugging the developers who do, or they track down a system admin to help. It can take hours or days to see server information that would take seconds or minutes if they had access of their own. It is very frustrating to the developer trying to solve the problem, to the system admin being forced to help, and most importantly to your customers, who are not happy about the situation. This process is terribly inefficient.

    Production database access is also important for solving application problems, but it presents a lot of risk if developers are given access. They could see data they shouldn't. They could accidentally write queries that update data, delete data, or merely select every record from every table and bring your database to its knees. Since most of the applications we create are data driven, it can be very difficult to track down application bugs without access to the production databases.

    Besides it being against the rules, why don't all developers have access? Most of the time it comes down to security, change control, lack of training, and other valid reasons. Developers have been known to tinker with different settings to try and solve a problem and in the process forget what they changed and make the problem worse. So it is a double-edged sword: don't give them access and fixing bugs is more difficult, or give them access and risk having more bugs or major outages created!

    Matt Watson
    Founder, CEO
    Stackify
    Agile Support for Agile Developers

    Read the article

  • Does this BSD-like license achieve what I want it to?

    - by Joseph Szymborski
    I was wondering if this license is:

    - self-defeating
    - just a clone of an existing, better established license
    - practical
    - any more "corporate-friendly" than the GPL
    - too vague/open-ended

    and finally, whether there is a better license that achieves a similar effect. I wanted a license that would (in simple terms):

    - be as flexible/simple as the "Simplified BSD" license (which is essentially the MIT license)
    - allow anyone to make modifications as long as I'm attributed
    - require that I get a notification that such a derived work exists
    - require that I have access to the source code and be given license to use the code
    - not oblige the author of the derivative work to release the source code to the general public
    - not oblige the author of the derivative work to license the derivative work under a specific license

    Here is the proposed license, which is just the Simplified BSD with a couple of additional clauses (all of which are bolded in the original post).

    Copyright (c) (year), (author) (email) All rights reserved.

    Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

    - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
    - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
    - The copyright holder(s) must be notified of any redistributions of source code.
    - The copyright holder(s) must be notified of any redistributions in binary form.
    - The copyright holder(s) must be granted access to the source code and/or the binary form of any redistribution upon the copyright holder's request.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    Read the article

  • How to do a hexdump of first track of HDD?

    - by Daniel Gratz
    How would I do a hexdump in Ubuntu of the first track of a HDD? I am looking for WinHex-esque output, if that makes sense. The first track has 63 sectors, each 512 bytes long. I tried dd if=/dev/sda bs=1 count=512 | hexdump -C but that only gave me what appears to be the MBR, i.e. the first sector of the HDD. I guess I am confused about what bs and count should be. Does bs mean how many bytes to display, and count how many multiples of bs? Thanks!
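
    For what it's worth, dd's bs is the block size in bytes and count is the number of blocks, so the classic 63-sector first track would be dd if=/dev/sda bs=512 count=63 | hexdump -C (63 x 512 = 32,256 bytes). Below is the same idea sketched in Python rather than dd, assuming the traditional 63-sector geometry and that /dev/sda is the right device (it must be run as root):

        # Read the first track (63 sectors of 512 bytes) from the raw device and
        # print a hexdump -C style view of it.
        SECTOR_SIZE = 512
        SECTORS_PER_TRACK = 63          # classic CHS geometry; adjust if yours differs

        with open("/dev/sda", "rb") as disk:
            data = disk.read(SECTOR_SIZE * SECTORS_PER_TRACK)

        for offset in range(0, len(data), 16):
            chunk = data[offset:offset + 16]
            hex_part = " ".join(f"{b:02x}" for b in chunk)
            ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
            print(f"{offset:08x}  {hex_part:<47}  |{ascii_part}|")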

    Read the article

  • Easy Shorewall question: allow IPs to bypass DNAT

    - by llazzaro
    Hello. On my home network I have a transparent proxy. This is the rule that forwards all port 80 traffic to my Squid 3.1 server in the DMZ:

    DNAT loc:!10.0.0.126 dmz:172.16.0.198:3128 tcp 80 - !172.16.0.198

    OK, I need to add more IPs that should bypass the transparent proxy. I tried loc:!10.0.0.134,!10.0.0.126... but it didn't work (I also tried similar forms like [ip0,ip1]). I tried to Google the answer but can't find it (sorry, no matches; I'm probably not searching with the right keywords), and I tried to read the docs, but they are really long (and the indexes don't help me). Thanks!

    Read the article

  • Two equal items in alternatives list

    - by Red Planet
    I want to have two JDKs. The first one was installed a long time ago to /usr/lib/jvm/java-7-oracle/. I installed the second version and executed the following commands to add it to the alternatives system:

    red-planet@laptop:~$ sudo update-alternatives --install "/usr/bin/java" "java" "/opt/java_1.6.0_35/bin/java" 2
    update-alternatives: using /opt/java_1.6.0_35/bin/java to provide /usr/bin/java (java) in auto mode.
    red-planet@laptop:~$ sudo update-alternatives --install "/usr/bin/javac" "javac" "/opt/java_1.6.0_35/bin/javac" 2
    update-alternatives: using /opt/java_1.6.0_35/bin/javac to provide /usr/bin/javac (javac) in auto mode.
    red-planet@laptop:~$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/opt/java_1.6.0_35/bin/javaws" 2
    update-alternatives: using /opt/java_1.6.0_35/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode.

    And when configuring:

    There are 2 choices for the alternative java (providing /usr/bin/java).

      Selection    Path                                   Priority   Status
    ------------------------------------------------------------
    * 0            /opt/java_1.6.0_35/bin/java            2          auto mode
      1            /opt/java_1.6.0_35/bin/java            2          manual mode
      2            /usr/lib/jvm/java-7-oracle/bin/java    1          manual mode

    Press enter to keep the current choice[*], or type selection number:

    Why do I have two equal items in the list?

    Read the article

  • Most efficient way to set up a game server

    - by alex bowers
    I'm running a PHP-based game which is predicted to have over 45 million members by the end of this year (2011); currently we are at 7.5 million. The game runs on Facebook, and I am in desperate need of help to make this game server as efficient and as powerful as possible. It is a dedicated server with these specs:

    Processor: Intel i7 920
    Frequency: 4x 2x 2.66 GHz
    NIC: Gigabit Ethernet
    RAM: 12 GB
    Hard disk: 4 x 1 TB

    It has Apache, cPanel, phpMyAdmin, several Apache mods and MySQL installed. The game also makes 47 MySQL calls per second per user. Are there any alternatives to the above that could be faster, more efficient, etc.? I don't mind having to recode the game to fit them, as long as it maximises our upper limit of members in the game. Thanks. Also, is there a way to tell what our maximum limit of players, database calls, etc. is? Thank you again, hope you guys can help :)

    Read the article

  • FTP error 425 failed to establish connection

    - by cKK
    Getting "ftp error 425 failed to establish connection" when trying to connect to ftp server. Tried 2 ftp clients on 3 machines on same network and none work. However FTP works from home / mobile broadband. No ip blocks on ftp sever. Other ftp servers(differrent ip/hosts) work okay. firewall setup correct, no ports blocked. Is it possible to use a proxy for ftp a i think it's something with the ISP but taking too long to fix?

    Read the article

  • iTunes command+tab does not bring the main window to the foreground after command+h

    - by ecoffey
    Related to: "Cmd-Tab does not bring iTunes to foreground". There, people claim that command+h will continue to work, but the behavior I'm seeing is:

    1. Launch a full screen Terminal instance (or anything else really)
    2. Launch a fresh iTunes
    3. command+h to hide it
    4. command+tab back to iTunes; this works and the main app window is given focus
    5. command+tab to Terminal
    6. command+tab to iTunes

    What I expect to happen: the main iTunes window is brought to the foreground and given focus. What does happen: the application menu bar shows iTunes, but an additional command+` is needed to bring the main app window to the foreground. I thought it might be related to a Chrome interaction, since I'm usually command+tabbing between my browser and everything else, but that does not seem to be the case. I do have two "plugins" running:

    - last.fm scrobbler
    - Alfred app mini iTunes player interface

    Not sure if either of those creates some phantom window or something that is mixing something up. Versions of stuff:

    OS X: 10.6.5
    iTunes: 10.1.1 (4)
    last.fm: can't find it, but it's recent
    Alfred: 0.8 (89)

    So after that big long rant, is anyone else seeing this behavior?

    Read the article

  • Is your TRY worth catching?

    - by Maria Zakourdaev
    A very useful error handling construct, TRY/CATCH, is widely used to catch all execution errors that do not close the database connection. The biggest downside is that in the case of multiple errors the TRY/CATCH mechanism will only catch the last error. An example of this can be seen during a standard restore operation. In this example I attempt to perform a restore from a file that no longer exists, and two errors are fired: 3201 and 3013. Assuming that we are using the TRY and CATCH construct, the ERROR_MESSAGE() function will catch the last message only. To work around this problem you can prepare a temporary table that will receive the statement output. Execute the statement through the xp_cmdshell stored procedure, connecting back to the SQL Server with the command line utility sqlcmd and redirecting its output into the previously created temp table. After receiving the output, you will need to parse it to understand whether the statement finished successfully or failed. It's quite easy to accomplish as long as you know which statement was executed. In the case of generic executions you can query the output table and search for words like "Msg%Level%State%" that are usually part of the error message. Furthermore, you don't need TRY/CATCH in the above workaround, since the xp_cmdshell procedure always finishes successfully and you can decide whether to fire the RAISERROR statement or not. Yours, Maria
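
    Maria's actual workaround runs sqlcmd from inside T-SQL via xp_cmdshell and parses the captured output; the sketch below only illustrates that same "run it through sqlcmd and scan every output line" idea from outside the server, in Python. The server name, statement text and parsing pattern are placeholders, not her exact code:

        # Run a statement through the sqlcmd utility, capture all of its output,
        # and scan it for error lines, since T-SQL TRY/CATCH only surfaces the
        # last error of a batch.
        import subprocess

        result = subprocess.run(
            ["sqlcmd", "-S", "localhost", "-E", "-Q",
             "RESTORE DATABASE MyDb FROM DISK = 'C:\\backups\\missing.bak'"],
            capture_output=True, text=True)

        output = result.stdout + result.stderr
        errors = [line for line in output.splitlines()
                  if line.startswith("Msg ") and "Level" in line and "State" in line]

        if errors:
            for line in errors:
                print("captured:", line)   # every error line, not just the last one
        else:
            print("statement completed without reported errors")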

    Read the article

  • Multi sim GSM modem or alternative

    - by Ando
    I'm trying to administer the SMS traffic of my business centrally through a web portal. In Europe (except the UK) we don't have a numbers/SMS traffic provider like Twilio or Clickatell, nor any built-in way to administer the SMS traffic for a number via HTTP, so I will have to buy the long numbers and administer the SMS traffic myself. For this I was looking into a hardware solution for hosting all my SIM cards - I have around 400 SIM cards (= numbers). I saw that GSM modems might fit, but they don't seem to scale up very well. Could you recommend a GSM modem? If this is not the best way to approach this, what would my alternatives be? Thanks in advance.

    Read the article

  • FreeBSD Ports: How can I see all dependencies for a port, and all subdependencies for those dependencies?

    - by Stefan Lasiewski
    I'm trying to build a port which depends on apache-ant. I thought I could run make build-depends-list to see all dependencies required by this port:

    # make build-depends-list
    /usr/ports/devel/apache-ant
    /usr/ports/java/jdk16
    /usr/ports/math/gmp

    But after installing everything, the port had a dependency list which was a mile long:

    apache-ant-1.8.1 desktop-file-utils-0.15_2 gamin-0.1.10_4 gettext-0.18.1.1 gio-fam-backend-2.26.1 glib-2.26.1_1 gmp-5.0.1 inputproto-2.0 javavmwrapper-2.3.5 kbproto-1.0.4 libX11-1.3.3_1,1 libXau-1.0.5 libXdmcp-1.0.3 libXext-1.1.1,1 libXi-1.3,1 libXtst-1.1.0 libiconv-1.13.1_1 libpthread-stubs-0.3_3 libxcb-1.7 pcre-8.12 perl-5.10.1_3 pkg-config-0.25_1 python26-2.6.6 recordproto-1.14 unzip-6.0 xextproto-7.1.1 xproto

    How can I see all dependencies, and all subdependencies, for a port?

    Read the article

  • The ghost of icons past

    - by Kenneth Cochran
    I am fortnightly (or an approximation thereof) visited by the ghosts of icons long departed. These apparitions only reveal themselves momentarily after I've logged in, then vanish, returning whence they came. I've investigated their history and found no clues as to why they continue to haunt me. I sought out an exorcist, but those I found were only qualified to expel the spirits of humans and demons; not one had any experience with digital poltergeists. Perhaps praying to Saint William of Redmond will improve the Vista before me. Dost thou agree?

    Read the article

  • Tips and tricks to make NX server more stable

    - by gareth_bowles
    My shop has been using the FreeNX server on Fedora 11 for a while now and mostly getting good results, especially with performance, but we have some annoying problems with client connections. There are two main issues:

    - Client sessions sometimes freeze after a long time (it seems to take at least 2 hours of the session being active).
    - We often have to make multiple attempts to start a new client session, especially if a previous session was suspended rather than terminated. In quite a few cases, we've had to restart the NX server to get around this.

    Our NX server configuration is the default, except that we've enabled logging level 7 to /var/log/nxserver.log and set the font server to "unix:/7100" so that it uses xfs. Does anyone have any ideas for making things more stable?

    Read the article

  • MacBook Pro shutting down instantly after attempt to update firmware

    - by Luke Dennis
    I tried to apply a firmware update on a MacBook Pro 2.4 GHz (2008), and after rebooting, the fans kicked up to maximum, the screen lit up, and then it immediately died. Now when I try to boot, it does the same thing: the fans crank to max, the screen lights up, and then it dies after about 2 seconds. Resetting the power manager does nothing, and it doesn't stay up long enough to choose another boot drive or boot from CD. I have no idea what else to try. Help?

    Read the article

  • Actually utilizing relational databases for entity systems

    - by Marc Müller
    Recently I was researching several entity systems, and naturally I came across T=Machine's fantastic articles on the subject. In Part 5 of the series the author uses a relational schema to explain how an entity system is built and works. Since reading this, I have been wondering whether actually using a compact SQL library would be fast enough for real-time usage in video games. Performance seems to be the main issue with a full-blown SQL database for management of all entities and components. However, as mentioned in T=Machine's post, basically all access to data inside the SQL DB is done sequentially by each system over each component. Additionally, using a library like SQLite, one could easily improve performance by storing the entity data exclusively in RAM to increase access speeds. Disregarding possible performance issues, using a SQL database would, in my opinion, allow for a very intuitive implementation of entity systems and bring along certain other benefits, like easy de/serialization of game states and consistency checks such as the uniqueness of entity IDs. Edit for clarification: the main question was whether using a SQL database for the actual entity management (not just storing the game state on disk) in a real-time game would still yield a framerate appropriate for a game, or whether anyone is aware of projects that demonstrate SQL used this way in a video game.
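
    To make the idea concrete, here is a minimal sketch of "components as relational rows" using an in-memory SQLite database from Python; the table and column names are my own invention, not T=Machine's schema. Each "system" is then just a query or UPDATE over the join of the component tables it cares about, which is exactly the sequential access pattern described above.

        import sqlite3

        db = sqlite3.connect(":memory:")          # keep everything in RAM, as suggested
        db.executescript("""
            CREATE TABLE entity   (id INTEGER PRIMARY KEY);
            CREATE TABLE position (entity_id INTEGER UNIQUE REFERENCES entity(id), x REAL, y REAL);
            CREATE TABLE velocity (entity_id INTEGER UNIQUE REFERENCES entity(id), dx REAL, dy REAL);
        """)

        eid = db.execute("INSERT INTO entity DEFAULT VALUES").lastrowid
        db.execute("INSERT INTO position VALUES (?, 0.0, 0.0)", (eid,))
        db.execute("INSERT INTO velocity VALUES (?, 1.5, -0.5)", (eid,))

        # A "movement system": one sequential pass over every entity with both components.
        db.execute("""
            UPDATE position
               SET x = x + (SELECT dx FROM velocity WHERE velocity.entity_id = position.entity_id),
                   y = y + (SELECT dy FROM velocity WHERE velocity.entity_id = position.entity_id)
             WHERE entity_id IN (SELECT entity_id FROM velocity)
        """)
        print(db.execute("SELECT * FROM position").fetchall())   # [(1, 1.5, -0.5)]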

    Read the article

  • software architecture (OO design) refresher course

    - by PeterT
    I am the lead developer and team lead in a small RAD team. Deadlines are tight and we have to release often, which we do, and this is what keeps the business happy. While we (the development team) are trying to maintain the quality of the code (clean and short methods), I can't help but notice that the overall quality of the OO design and architecture is getting worse over time - the library we are working on is gradually reducing itself to a "bag of functions". Well, we try to use the design patterns, but since we don't really have much time for design as such we mostly use the creational ones. I have read Code Complete, Design Patterns (GoF and enterprise), The Pragmatic Programmer, and many books from the Effective XXX series. Should I re-read them, as I read them a long time ago and have forgotten quite a lot, or are there other/better OO design and software architecture books published since then which I should definitely read? Any ideas or recommendations on how I can get the situation under control and start improving the architecture? The way I see it, I will start improving the architectural/design quality of the software components I am working on, and then start helping other team members once I find what works for me.

    Read the article

  • Podcast Show Notes: By Any Other Name: Governance and Architecture

    - by Bob Rhubart
    The OTN ArchBeat Podcast returns from a brief summer hiatus with a three-part conversation about IT architecture and governance. My guests for this conversation are Eric Stephens, an Oracle enterprise architect and a frequent guest on this program, and, joining Eric on the panel, Tim Hall, Senior Director of product management for the Oracle Enterprise Repository, Oracle Service Registry, and Oracle Application Integration Architecture. Tim made his first appearance on ArchBeat as a panelist on the recent program featuring Thomas Erl.

    The Conversation

    - Listen to Part 1: Why it's important to revive the dormant conversation about IT governance.
    - Listen to Part 2 (Sept 19): Balancing functional, technical, and operational requirements to meet the challenge of defining appropriate governance "guardrails."
    - Listen to Part 3 (Sept 26): Bringing IT architecture out of the ivory tower to make governance a less intimidating, more collaborative process.

    Additional Resources

    - Leveraging Governance to Sustain Enterprise Architecture Efforts, an Oracle white paper by Eric Stephens.
    - SOA, Cloud, and Service Technologies, a transcript of an ArchBeat interview with Thomas Erl, Tim Hall, and Demed L'Her, in which Tim says the following about governance: "For a long time people have argued that SOA governance is sort of an awkward name, no one wanted to be audited. There's 50% of the world that think, yes, we're going to have a top-down initiative to address this, and there's 50% of the world that says that it feels like a heavyweight process that I want no part of. So what I think we should do is change the name..."

    Read the article

  • SQL Server 2008 R2 100% availability

    - by Mark Henderson
    Is there any way to provide 100% uptime with SQL Server 2008 R2? From my experience, the downtimes for the different replication methods are:

    - Log shipping: lots (for DR only)
    - Mirroring with NLB: ~45 seconds
    - Clustering: ~5-15 seconds

    And all of these solutions involve all of the connections being dropped from the source, so if the downtime is too long, or the app's gateway doesn't support reconnection in the middle of a task, then you're out of luck. The only way I can think of to get around this is to abstract the clustering up a level by virtualising and then enabling VMware FT (yuck; good luck getting that to work on a quad-socket, 32-core system anyway). Is there any other way of providing 100% uptime for SQL Server?

    Read the article

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to other objects? Should they be created at a "high level", intercepting calls as soon as possible, or at a "low level", so that as much of the real code as possible is still exercised? For example, I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB:

    class NoteMapper {
        function getNote($sqlQueryFactory, $noteID) {
            // Create an SQL query from $sqlQueryFactory
            // Run that SQL
            // if null
            //     return null
            // else
            //     return new Note($dataFromSQLQuery)
        }
    }

    I could mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to the SQL at all, e.g.:

    class MockNoteMapper {
        function getNote($sqlQueryFactory, $noteID) {
            // $mockData = {'Test Note title', "Test note text" }
            // return new Note($mockData);
        }
    }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the real NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data going in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which is better, or should both be used where appropriate?
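
    To illustrate the two levels, here is a minimal sketch in Python using unittest.mock; the original example is PHP, the class and method names below merely mirror it, and the query-factory interface is my own assumption. The high-level mock never touches the mapper's logic, while the low-level mock exercises it against canned rows, which is the trade-off the question describes.

        from unittest.mock import Mock

        # High level: replace the whole NoteMapper, so no SQL path is exercised at all.
        high_level_mapper = Mock()
        high_level_mapper.get_note.return_value = {"title": "Test Note title",
                                                   "text": "Test note text"}

        # Low level: keep the real mapper, but hand it a fake SQL query factory whose
        # query returns canned rows, so the mapper's own logic still runs.
        class NoteMapper:
            def get_note(self, sql_query_factory, note_id):
                query = sql_query_factory.create("SELECT title, text FROM notes WHERE id = ?")
                row = query.fetch_one(note_id)
                return None if row is None else {"title": row[0], "text": row[1]}

        fake_query = Mock()
        fake_query.fetch_one.return_value = ("Test Note title", "Test note text")
        fake_factory = Mock()
        fake_factory.create.return_value = fake_query

        print(high_level_mapper.get_note(None, 42))      # canned Note data, mapper skipped
        print(NoteMapper().get_note(fake_factory, 42))   # mapper logic runs on canned rows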

    Read the article

  • 503 service unavailable when debugging PHP script in Zend Studio

    - by user25932
    I have a web server with Apache 2.0 installed; it comes with the Zend Server install pack. When I try to debug my PHP files, Apache serves a blank page with "503 Service Unavailable". Of course slow server-side code is tying up Apache requests for far too long, but I need it to wait until my debugging session ends. When I request the page from a browser, it launches Zend Studio to debug my PHP script (the request is redirected by the Zend Debugger module). I step through my script, and if I finish debugging within 120 seconds I return to the browser normally. When it takes more than 120 seconds, the browser displays "503 Service Unavailable" and I can't get back to the page output. I have even forced max_execution_time = 300 and max_input_time = 600 in php.ini and TimeOut = 500 in httpd.conf. It makes no difference whether the browser is Opera, IE or Firefox. I've spent two days Googling it with no right answer so far.

    Read the article

  • Splitting big request in multiple small ajax requests

    - by Ionut
    I am unsure about the scalability of the following model. I have no experience at all with large systems and big numbers of requests, but I'm trying to build features with scalability in mind first. In my scenario there is a user page which contains data for:

    - the user's details (name, location, workplace...)
    - the user's activity (blog posts, comments...)
    - user statistics (rating, number of friends...)

    In order to show all this on the same page, a single request needs at least 3 different database queries on the back end. In some cases, I imagine those queries will run for quite a while, so the user experience may suffer while waiting. This is why I decided to run only step 1 (the user's details) as a normal request. After the response is received, two ajax requests are sent for steps 2 and 3. When those responses arrive, I simply place the data in their destined wrappers. To me at least this makes more sense, but there are now 3 requests instead of one for every user page view. Will this affect the system in the long term? I'm assuming that this kind of approach requires more resources, but is this trade of UX for resources a good deal, or should I stick to one plain big request?

    Read the article

  • Removing specific part of filename (what's after the second dash) for all files in a folder

    - by Bodo
    I use the command line utility youtube-dl to download videos from YouTube and make mp3s from them with avconv. I'm doing this under Ubuntu 14.04 and am very happy with it. The utility downloads the files and saves them with the following naming scheme:

    TITLE(artist-track)-ID.mp3

    So an actual filename looks like:

    EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3

    Some other file names in the folder look like:

    EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3
    Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3
    Stromae - Papaoutai-oiKj0Z_Xnjc.mp3

    At first this was no problem; it didn't bother me while listening to my music in Rhythmbox. But when moving the files to my phone or other devices it is pretty confusing to see such long names, and some players, like the Samsung ones, treat that last part of the name (the ID after the second dash) as the album or something. I'd like to create a bash script that removes what's after the second dash in the name for all files, so it'll turn them from:

    Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3

    to:

    Martin Garrix - Animals (Official Video).mp3

    Is it also possible to instruct youtube-dl to exclude the ID from now on? I am currently downloading with the command:

    youtube-dl --extract-audio --audio-quality 0 --audio-format mp3 URL
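
    Since titles can themselves contain dashes (as in the Martin Garrix example), it is probably safer to strip the trailing 11-character YouTube ID than to cut literally at the second dash. The post asks for bash, but here is the same renaming rule sketched in Python; the folder path is a placeholder:

        import os
        import re

        MUSIC_DIR = "/home/bodo/Music"   # hypothetical folder; change to the real one

        for name in os.listdir(MUSIC_DIR):
            # YouTube IDs are 11 characters drawn from letters, digits, "-" and "_".
            new_name = re.sub(r"-[A-Za-z0-9_-]{11}\.mp3$", ".mp3", name)
            if new_name != name:
                os.rename(os.path.join(MUSIC_DIR, name),
                          os.path.join(MUSIC_DIR, new_name))

    As for the second question, youtube-dl's output template should let you drop the ID at download time, something like -o "%(title)s.%(ext)s", if I remember the option correctly.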

    Read the article

  • Checking if an object is inside bounds of an isometric chunk

    - by gopgop
    How would I check if an object is inside the bounds of an isometric chunk? For example, I have a player and I want to check whether it's inside the bounds of a given isometric chunk. I draw the isometric chunk's tiles using OpenGL quads. My first try was checking in a square pattern kind of thing (e = object; this = the isometric chunk):

    if (e.getLocation().getX() < this.getLocation().getX() + World.CHUNK_WIDTH * World.TILE_WIDTH
            && e.getLocation().getX() > this.getLocation().getX()) {
        if (e.getLocation().getY() > this.getLocation().getY()
                && e.getLocation().getY() < this.getLocation().getY() + World.CHUNK_HEIGHT * World.TILE_HEIGHT) {
            return true;
        }
    }
    return false;

    What happens here is that it checks a SQUARE around the chunk, not the real isometric bounds. (The original post includes images showing, in red, where the program checks the bounds: what I have now versus the desired check.) Ultimately I want to do the same for each tile in the chunk.

    Extra info: until now in my game you could only move tile by tile, but now I want entities to move freely while still having a tile location, so that no matter where they are on a tile, their tile location is that tile; when they move inside a different tile's bounding box, their tile location becomes the new tile. The same goes for chunks. The player does have an area, but the area does not matter in this case: as long as the X and Y are inside the bounding box it should return true. They don't have to be completely on the tile.
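
    A common way to get the diamond-shaped check described above is to test whether the point's normalised distances from the diamond's centre sum to at most 1. A minimal sketch in Python (the original code is Java-like; the coordinate convention here, with x and y as the top-left of the chunk's bounding box, is an assumption), and the same test works per tile by plugging in each tile's bounding box:

        def inside_isometric_bounds(px, py, x, y, width, height):
            # (x, y) is the top-left of the diamond's bounding box; the diamond's
            # corners sit at the mid-points of that box's edges.
            cx = x + width / 2.0
            cy = y + height / 2.0
            dx = abs(px - cx) / (width / 2.0)
            dy = abs(py - cy) / (height / 2.0)
            return dx + dy <= 1.0

        print(inside_isometric_bounds(50, 25, 0, 0, 100, 50))   # centre of the box -> True
        print(inside_isometric_bounds(1, 1, 0, 0, 100, 50))     # box corner -> False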

    Read the article
