Search Results

Search found 20878 results on 836 pages for 'lint check'.


  • Making Cisco WebEx work with 13.10 Saucy 64-bit

    - by Russ Lowenthal
    I was having a very hard time getting WebEx to work under Saucy. Up until now I've been able to just install a Java plugin, install ia32-libs, and I was good to go. With Saucy, ia32-libs is gone and it's up to us to figure out which 32-bit libraries we need to install. I struggled with this for a few days, trying blindly to install this and that, until I found a way to get exactly what I need. I got the clue I needed from this post: http://blogs.kde.org/2013/02/05/ot-how-get-webex-working-suse-linux-122-64bit#comment-9534 and, for anyone who wants it, here is a step-by-step method to follow that works every time (so far):

    1. Install a JDK and configure the Java plugin for your browser. No need for a 32-bit JDK or Firefox.
    2. Try to start a WebEx session. This will create $HOME/.webex/1324/.
    3. Check those .so libraries for unresolved dependencies by running ldd against them. For example:

            ldd $HOME/.webex/1324/*.so >>check.txt

       Look in check.txt for anything that is not found. For example, I found:

            libdbr.so:
            linux-gate.so.1 => (0xf7742000)
            libjawt.so => not found
            libX11.so.6 => /usr/lib/i386-linux-gnu/libX11.so.6 (0xf75e6000)
            libXmu.so.6 => not found
            libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xf75e0000)

    4. Find which packages provide those files by installing apt-file:

            sudo apt-get install apt-file
            apt-file update

       Note: apt-file update will take a while, go get a cup of tea. Then locate which package contains your missing libraries with:

            apt-file search libXmu.so.6
            apt-file search libjawt.so

    5. Fix it using:

            apt-get install -y libxmu6:i386
            apt-get install -y libgcj12-awt:i386
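    To save some manual scanning of check.txt, the ldd pass can be collapsed into one loop. This is just a convenience sketch assuming the same $HOME/.webex/1324 layout as above:

        for so in $HOME/.webex/1324/*.so; do
            # print only the dependencies the loader could not resolve
            ldd "$so" | grep 'not found'
        done | sort -u

    Each unique line it prints is a candidate for an apt-file search followed by an apt-get install of the matching :i386 package.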

    Read the article

  • Incrementing Assembly Version in TFS Builds and its effect on Other Build Definitions

    - by ssmantha
    A very common scenario when performing TFS builds is incrementing the version number of the assemblies. There are quite a few approaches, of which I would like to share two links: Ewald Hofman’s approach: http://www.ewaldhofman.nl/post/2010/05/13/Customize-Team-Build-2010-e28093-Part-5-Increase-AssemblyVersion.aspx#id_02e7b082-ce95-49a9-92e9-7dc88887b377 and Richard Banks’ approach: http://www.richard-banks.org/2010/07/how-to-versioning-builds-with-tfs-2010.html. Both of these approaches work well; however, there are scenarios where editing and checking in the assembly version information can create problems for build definitions meant for continuous integration or gated check-ins. You can suppress the continuous integration builds while checking in the assembly info file by just putting the comment “***NO_CI***” in the check-in comment, as specified by Ewald in his blog. However, if you have gated check-in in place, this can turn out to be difficult to suppress; I myself tried to suppress the build trigger during the check-in process, but things didn’t turn out well. That’s where Richard’s solution comes in handy. Both solutions have their own pros and cons, which I believe can only be experienced over a period of time. In the case of Richard’s solution, I believe we don’t keep any history of the assembly version info file, so when you do a Get Latest on the solution that information will be lost. If you notice closely, suppressing continuous integration (the NO_CI approach in check-in comments) is a workaround provided by Microsoft; however, I haven’t found anything to suppress gated check-ins so far. Suggestions or findings are most welcome.

    Read the article

  • BSOD error on Win7 ultimate

    - by jim
        BUG CHECK STRING = DRIVER_POWER_STATE_FAILURE
        BUG CHECK CODE = 0x0000009f
        PARAMETER 1 = 0x00000003
        PARAMETER 2 = 0x85f05420
        PARAMETER 3 = 0x82962ae0
        PARAMETER 4 = 0x85aa2ab8
        CAUSED BY DRIVER = ntkrnlpa.exe
        CAUSED BY ADDRESS = ntkrnlpa.exe+dcd10
        FILE DESCRIPTION = NT Kernel & System Microsoft® Windows® Operating System Microsoft Corporation 6.1.7600.16385 (win7_rtm.090713-1255) 32-bit
        C:\Windows\minidump\111109-22198-01.dmp

    This is the BSOD minidump that I have been getting recently. I was wondering if anyone had any ideas what would cause this. I just did a clean install of Win7 Ultimate 32-bit. I don't do a lot with this PC, mostly surf the web, and that's when it happens. My only guess is that the power supply is failing and I need a new one. I wanted to check here and see if anyone else had some other idea before I bought a new one. Thank you for any help you can give me.

    Read the article

  • TFS Build Server not finding 'Microsoft.Expression.Interactions'

    - by Chris Skardon
    We’ve been trying to get the build server to pick up the Microsoft.Expression.Interactions.dll needed so we can use things like the ExtendedVisualStateManager and the DataStateBehavior. Adding the DataStateBehavior in Blend adds the reference to the project, so it all compiles fine on the local machine. Checking the code into a CI server throws up some ugliness:

        d:\Builds\6\Source\MyFile.xaml (290): The tag 'DataStateBehavior' does not exist in XML namespace 'http://schemas.microsoft.com/expression/2010/interactions'.

    Errr, it should do…?? The reference is there, so… ahhhhh! A quick check of the properties and we’re using the dll from the c:\program files (x86)\ location; no wonder the build server can’t find it. Let’s add the dll into our (ever expanding) ‘lib’ folder, reference that version, and check that bad boy in… No. Still no. Still the same error. What the??? The reference is still pointing to the program files location?? Ok, so let’s modify the csproj file using the wonderful Notepad, changing the reference from

        <Reference Include="Microsoft.Expression.Interactions, /* loadsa shizzle here */ />

    to

        <Reference Include="Microsoft.Expression.Interactions">
            <HintPath>..\Lib\Microsoft.Expression.Interactions.dll</HintPath>
        </Reference>

    Check that in, aaaand… success!!! The reason for this (from what I can gather from here) is that we’re using the Productivity Power Tools, and they try to be clever about referencing dlls, and changed what we’d asked for (i.e. the local version) back to the original program files location. Editing the file in Notepad (sweet, sweet Notepad) gets around this issue… Irritating, took a while to figure this out… Meh :)

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data...), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technology based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on has a predominant role in our lives. More and more technologies are used to analyze/track our behavior and try to detect patterns, to propose "the best/right user experience", from the Google Ad services to telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple:

    1. Install on a Linux x64 server (or VirtualBox appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server.
    2. Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    3. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    4. Check the Java version of your Linux server with the command:

            java -version
            java version "1.7.0_40"
            Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
            Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

    5. Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1.
    6. Modify your .bash_profile (also see my sample .bash_profile):

            export HADOOP_HOME=/u01/hadoop-1.2.1
            export PATH=$PATH:$HADOOP_HOME/bin
            export HIVE_HOME=/u01/hive-0.11.0
            export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin

    7. Set up SSH trust for the Hadoop process. This is a mandatory step; in our case we have to establish a "local trust" as we are using a single node configuration: copy the new public keys to the list of authorized keys, then connect and test the SSH setup to your localhost.

    We will run a "pseudo-Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
    We need to "fine tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml (a sketch of core-site.xml appears at the end of this HDFS section). Check that the hadoop binaries are referenced correctly from the command line by executing:

        hadoop -version

    As Hadoop is managing our "clustered HDFS" file system, we have to create "the mount point" and format it. The mount point is declared in core-site.xml; the layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce uses the /mapred/... layout, HDFS uses the /dfs/... layout). Format the HDFS hadoop file system, then start the Java components for the HDFS system. As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configuration. Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and plug it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You can create/use a Hive DB, but in our case we will make a simple integration of "raw data" through the creation of an external table in a local Oracle instance (on the same Linux box we run the Hadoop HDFS one node cluster and one Oracle DB). The steps:

    1. Download some public "big data". I use the site http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :).
    2. Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system.
    3. Modify your environment in order to access the connector libraries, and make the following test:

            [oracle@dg1 bin]$ ./hdfs_stream
            Usage: hdfs_stream locationFile
            [oracle@dg1 bin]$

    4. Load the data into the Hadoop HDFS file system:

            hadoop fs -mkdir bgtest_data
            hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt
            [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
            Found 1 items
            -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
            [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
            Found 1 items
            -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

       Check the content of the HDFS with the browser UI.
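    For reference, here is a minimal sketch of what the core-site.xml "mount point" declaration mentioned above might look like for this single node setup. Both values are assumptions: the port is inferred from the hdfs://localhost:19000 URIs shown later in this post, and the data directory from the /u01/hadoop-1.2.1/data layout described above; adjust both to your environment:

        <?xml version="1.0"?>
        <configuration>
          <!-- assumed: must match the hdfs:// URIs used by the clients -->
          <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:19000</value>
          </property>
          <!-- assumed: base directory under which the /dfs and /mapred layouts are created -->
          <property>
            <name>hadoop.tmp.dir</name>
            <value>/u01/hadoop-1.2.1/data</value>
          </property>
        </configuration>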
    Start the Oracle database and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own DB SID; replace it with yours):

        #!/bin/bash
        export ORAENV_ASK=NO
        export ORACLE_SID=dg1
        . oraenv
        sqlplus /nolog <<EOF
        CONNECT / AS sysdba;
        CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
        CREATE USER BGUSER IDENTIFIED BY oracle;
        GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
        GRANT EXECUTE ON sys.utl_file TO BGUSER;
        GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
        GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
        GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
        EOF

    Put the following in a file named t3.sh and make it executable:

        hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
        oracle.hadoop.exttab.ExternalTable \
        -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
        -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
        -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
        -D oracle.hadoop.exttab.columnCount=7 \
        -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
        -D oracle.hadoop.connection.user=BGUSER \
        -D oracle.hadoop.exttab.printStackTrace=true \
        -createTable --noexecute

    then test the creation of the external table with it:

        [oracle@dg1 samples]$ ./t3.sh
        ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
        Oracle SQL Connector for HDFS Release 2.2.0 - Production
        Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
        Enter Database Password:]
        The create table command was not executed.
        The following table would be created.
        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000),
          "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) )
        ORGANIZATION EXTERNAL
        ( TYPE ORACLE_LOADER
          DEFAULT DIRECTORY "BGT_DATA_DIR"
          ACCESS PARAMETERS
          ( RECORDS DELIMITED BY 0X'0A'
            CHARACTERSET AL32UTF8
            STRING SIZES ARE IN CHARACTERS
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY 0X'2C'
            MISSING FIELD VALUES ARE NULL
            ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000),
              "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) )
          LOCATION ( 'osch-20131022081035-74-1' ) )
        PARALLEL REJECT LIMIT UNLIMITED;
        The following location files would be created.
        osch-20131022081035-74-1 contains 1 URI, 54103 bytes
        54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results:

        The create table command succeeded.
        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        ( ...same column list and access parameters as above... )
          LOCATION ( 'osch-20131022081719-3239-1' ) )
        PARALLEL REJECT LIMIT UNLIMITED;
        The following location files were created.
        osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
        54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    This is the view from SQL Developer. And finally, the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

        SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";
        COUNT(*)
        ----------
        1151

    In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured data world can be integrated into the Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the related Oracle technologies:

    NoSQL:
        Introduction to Oracle NoSQL Database
        Using Oracle NoSQL Database
    Big Data:
        Introduction to Big Data
        Oracle Big Data Essentials
        Oracle Big Data Overview
    Oracle Data Integrator:
        Oracle Data Integrator 12c: New Features
        Oracle Data Integrator 11g: Integration and Administration
        Oracle Data Integrator: Administration and Development
        Oracle Data Integrator 11g: Advanced Integration and Development
    Oracle Coherence 12c:
        Oracle Coherence 12c: New Features
        Oracle Coherence 12c: Share and Manage Data in Clusters
    Oracle GoldenGate:
        Oracle GoldenGate 11g: Fundamentals for Oracle
        Oracle GoldenGate 11g: Fundamentals for SQL Server
        Oracle GoldenGate 11g: Fundamentals for DB2
        Oracle GoldenGate 11g: Fundamentals for Teradata
        Oracle GoldenGate 11g: Fundamentals for HP NonStop
        Oracle GoldenGate 11g Management Pack: Overview
        Oracle GoldenGate 11g: Troubleshooting and Tuning
        Oracle GoldenGate 11g: Advanced Configuration for Oracle

    Other resources: Apache Hadoop (http://hadoop.apache.org/) is the homepage for these technologies. "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you more references.

    About the author: Eugene Simos is based in France and joined Oracle through the BEA/WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He worked in the banking sector, AT&T and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite throughout the EMEA region.

    Read the article

  • Got Bus error (core dumped) when running nova-compute as the nova user

    - by Lafada
    I got an error when I tried to restart the nova-compute service on my compute node:

        [root@mycompute ~]# service nova-compute restart
        Stopping OpenStack Nova Compute Worker:          [  OK  ]
        Starting OpenStack Nova Compute Worker:          [FAILED]

    I checked the log file /var/log/nova/nova-compute.log, but nothing is logged in that file. Then I tried to run the command as the nova user:

        [root@mycompute ~]# su -s /bin/bash nova
        bash-4.1$ /usr/bin/nova-compute
        Bus error (core dumped)

    I searched for this error on the net but didn't find anything telling me where to look next.

    Read the article

  • Failed to load viewstate. The control tree into which viewstate is being loaded... etc.

    - by alaa9jo
    Two days ago, a colleague of mine tried to publish an ASP.NET website (built in VS2008 using framework 3.5) to our server. He configured everything in IIS (he made sure that the selected ASP.NET version was 2.0) and launched the website. At first it was working great, but when he clicked on a specific treeview... BOOM: "Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request." That page contains two controls: a TreeView and a PlaceHolder. When the user selects any node, its controls are created dynamically inside that placeholder. The first time it works fine, but when he or she selects another node, the issue appears. He called me to help him with this issue; for me this was the first time I had seen such an issue. I scratched my head, then decided to eliminate the possible causes one by one. On the development machine it works perfectly. He published the website to his local IIS and again it worked perfectly. I took a copy of the website and published it on my laptop, with no issues at all, which means it's not an issue in the code. So there was something missing or wrong on our server [it runs Windows Server 2003]. We went to the server and checked the web.config and the configuration in IIS... nothing wrong so far. So I decided to check whether framework 3.5 was installed or not, and the answer: it wasn't installed. Of course he had assumed it was installed, and there was nothing in the "ASP.NET version" setting in IIS to tell him otherwise, because frameworks 3.0 and 3.5 are not listed there [2.0 is listed instead]. The only ways to check whether it is installed are to look for the framework in this path: [WINDOWS folder]\Microsoft.NET\Framework, or to check Add or Remove Programs. The obvious solution for his case: we installed Framework 3.5 SP1 on our server, restarted the machine, and it worked! If anyone has faced the same issue and solved it with the same solution or a different one, please post it here to share the experience.

    Read the article

  • Stop Zabbix notifications for nodes under zabbix-proxy when the proxy service is down

    - by A_01
    I have a zabbix-proxy and 12 nodes behind that proxy. Right now, whenever the proxy service goes down, it sends out-of-reach mail for all 12 nodes. I want to send mail only for the Zabbix proxy, not for the nodes under that proxy. Updated: Now I am trying to have a single trigger that checks both of these conditions: 1. check that the Zabbix host has not been accessible for the past x minutes; 2. check that the host is not sending any data to the proxy (host is down). The trigger should only start shouting when the proxy is running and the node is down. I tried the expression below but it's not working for me. Can someone please help me out with this?

        ({ip-10-4-1-17.ec2.internal:agent.ping.nodata(2m)}=1) &
        ({ip-10-4-1-17.ec2.internal:zabbix[proxy,zabbixproxy.dev-test.com,lastaccess].fuzzytime(120)}=1)

    Read the article

  • NHibernate Conventions

    - by Ricardo Peres
    Introduction

    It seems that nowadays everyone loves conventions! Not the ones that you go to, but the ones that you use, that is! It just happens that NHibernate also supports conventions, and we'll see exactly how. Conventions in NHibernate are supported in two ways: naming of tables and columns when not explicitly indicated in the mappings, and full domain mapping.

    Naming of Tables and Columns

    NHibernate has always supported the concept of a naming strategy. A naming strategy in NHibernate converts class and property names to table and column names and vice-versa, when a name is not explicitly supplied. Concretely, it must be a realization of the NHibernate.Cfg.INamingStrategy interface, of which NHibernate includes two implementations:

    - DefaultNamingStrategy: the default implementation, where each column and table is mapped to the identically named property and class; for example, "MyEntity" will translate to "MyEntity";
    - ImprovedNamingStrategy: underscores (_) are used to separate Pascal-cased fragments; for example, entity "MyEntity" will be mapped to a "my_entity" table.

    The naming strategy can be defined at configuration level (the Configuration instance) by calling the SetNamingStrategy method:

        cfg.SetNamingStrategy(ImprovedNamingStrategy.Instance);

    Both the DefaultNamingStrategy and the ImprovedNamingStrategy classes offer singleton instances in the form of Instance static fields. DefaultNamingStrategy is the one NHibernate uses if you don't specify one.

    Domain Mapping

    In mapping by code, we have the choice of relying on conventions to do the mapping automatically. This means a class will inspect our classes and decide how they will relate to the database objects. The class that handles conventions is NHibernate.Mapping.ByCode.ConventionModelMapper, a specialization of the base by-code mapper, NHibernate.Mapping.ByCode.ModelMapper. The ModelMapper relies on an internal SimpleModelInspector to help it decide what and how to map, but the mapper lets you override its decisions. You apply code conventions like this:

        //pick the types that you want to map
        IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes();

        //conventions based mapper
        ConventionModelMapper mapper = new ConventionModelMapper();

        HbmMapping mapping = mapper.CompileMappingFor(types);

        //the one and only configuration instance
        Configuration cfg = ...;
        cfg.AddMapping(mapping);

    This is a very simple example; it lacks, at least, the id generation strategy, which you can add with an event handler like this:

        mapper.BeforeMapClass += (IModelInspector modelInspector, Type type, IClassAttributesMapper classCustomizer) =>
        {
            classCustomizer.Id(x =>
            {
                //set the hilo generator
                x.Generator(Generators.HighLow);
            });
        };

    The mapper will fire events like this whenever it needs to get information about what to do. And basically this is all it takes to automatically map your domain! It will correctly configure many-to-one and one-to-many relations, choosing bags or sets depending on your collections, will get the table and column names from the naming strategy we saw earlier, and will apply the usual defaults to all properties, such as laziness and fetch mode. However, there is at least one thing missing: many-to-many relations. The conventional mapper doesn't know how to find and configure them, which is a pity but, alas, not difficult to overcome.
    To start, for my projects I have this rule: each entity exposes a public property of type ISet<T>, where T is, of course, the type of the other endpoint entity. Extensible as it is, NHibernate lets me implement this very easily:

        mapper.IsOneToMany((MemberInfo member, Boolean isLikely) =>
        {
            Type sourceType = member.DeclaringType;
            Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType();

            //check if the property is of a generic collection type
            if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1))
            {
                Type destinationEntityType = destinationType.GetGenericArguments().Single();

                //check if the type of the generic collection property is an entity
                if (mapper.ModelInspector.IsEntity(destinationEntityType) == true)
                {
                    //check if there is an equivalent property on the target type that is also a generic collection and points to this entity
                    PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault();

                    if (collectionInDestinationType != null)
                    {
                        return (false);
                    }
                }
            }

            return (true);
        });

        mapper.IsManyToMany((MemberInfo member, Boolean isLikely) =>
        {
            //a relation is many to many if it isn't one to many
            Boolean isOneToMany = mapper.ModelInspector.IsOneToMany(member);
            return (!isOneToMany);
        });

        mapper.BeforeMapManyToMany += (IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) =>
        {
            Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
            //set the mapping table column names from each source entity name plus the _Id suffix
            collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id");
        };

        mapper.BeforeMapSet += (IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) =>
        {
            if (modelInspector.IsManyToMany(member.LocalMember) == true)
            {
                propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id"));

                Type sourceType = member.LocalMember.DeclaringType;
                Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
                IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x);

                //set inverse on the relation of the alphabetically first entity name
                propertyCustomizer.Inverse(sourceType.Name == names.First());
                //set mapping table name from the entity names in alphabetical order
                propertyCustomizer.Table(String.Join("_", names));
            }
        };

    We have to understand how the conventions mapper thinks:

    - For each collection of entities found, it will ask the mapper if it is a one-to-many; in our case, if the collection is a generic one that has an entity as its generic parameter, and the generic parameter type has a similar collection, then it is not a one-to-many;
    - Next, the mapper will ask if the collection that it now knows is not a one-to-many is a many-to-many;
    - Before a set is mapped, if it corresponds to a many-to-many, we set its mapping table.
    Now, this is tricky: because we have no way to maintain state, we sort the names of the two endpoint entities and combine them with a "_"; for the first alphabetical entity, we set its relation to inverse (remember, on a many-to-many relation, only one endpoint must be marked as inverse); finally, we set the column name as the name of the entity with an "_Id" suffix. Before the many-to-many relation is processed, we set the column name as the name of the other endpoint entity with the "_Id" suffix, as we did for the set. And that's it. With these rules, NHibernate will now happily find and configure many-to-many relations, as well as all the others. You can wrap this in a new conventions mapper class, so that it is more easily reusable:

        public class ManyToManyConventionModelMapper : ConventionModelMapper
        {
            public ManyToManyConventionModelMapper()
            {
                base.IsOneToMany((MemberInfo member, Boolean isLikely) =>
                {
                    return (this.IsOneToMany(member, isLikely));
                });

                base.IsManyToMany((MemberInfo member, Boolean isLikely) =>
                {
                    return (this.IsManyToMany(member, isLikely));
                });

                base.BeforeMapManyToMany += this.BeforeMapManyToMany;
                base.BeforeMapSet += this.BeforeMapSet;
            }

            protected virtual Boolean IsManyToMany(MemberInfo member, Boolean isLikely)
            {
                //a relation is many to many if it isn't one to many
                Boolean isOneToMany = this.ModelInspector.IsOneToMany(member);
                return (!isOneToMany);
            }

            protected virtual Boolean IsOneToMany(MemberInfo member, Boolean isLikely)
            {
                Type sourceType = member.DeclaringType;
                Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType();

                //check if the property is of a generic collection type
                if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1))
                {
                    Type destinationEntityType = destinationType.GetGenericArguments().Single();

                    //check if the type of the generic collection property is an entity
                    if (this.ModelInspector.IsEntity(destinationEntityType) == true)
                    {
                        //check if there is an equivalent property on the target type that is also a generic collection and points to this entity
                        PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault();

                        if (collectionInDestinationType != null)
                        {
                            return (false);
                        }
                    }
                }

                return (true);
            }

            protected virtual new void BeforeMapManyToMany(IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer)
            {
                Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
                //set the mapping table column names from each source entity name plus the _Id suffix
                collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id");
            }

            protected virtual new void BeforeMapSet(IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer)
            {
                if (modelInspector.IsManyToMany(member.LocalMember) == true)
                {
                    propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id"));

                    Type sourceType = member.LocalMember.DeclaringType;
                    Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
                    IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x);

                    //set inverse on the relation of the alphabetically first entity name
                    propertyCustomizer.Inverse(sourceType.Name == names.First());
                    //set mapping table name from the entity names in alphabetical order
                    propertyCustomizer.Table(String.Join("_", names));
                }
            }
        }

    Conclusion

    Of course, there is much more to mapping than this; I suggest you look at all the events and functions offered by the ModelMapper to see where you can hook in to make it behave the way you want. If you need any help, just let me know!
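    As a quick recap, here is how the wrapper class above plugs into the configuration. This simply mirrors the earlier conventions example with the new class name substituted; nothing beyond that is assumed:

        //pick the types to map, as before
        IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes();

        //the wrapper from this post, with the many-to-many rules baked in
        ConventionModelMapper mapper = new ManyToManyConventionModelMapper();

        HbmMapping mapping = mapper.CompileMappingFor(types);
        cfg.AddMapping(mapping);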

    Read the article

  • The visual effects in Windows 7 turn off after reboot

    - by Jagannath
    I have Windows 7 64-bit Professional on my PC. When using the English language, the visual effects are turned on. When I use the Telugu LIP (India), the visual effects are turned off every time I reboot my PC. By visual effects I don't mean the Aero effects: the maximize and minimize animations are not shown. The effects don't show up even though the check boxes are checked. (See the attached image: the check boxes are checked but the effects do not show.) UPDATE: I recreated the account and now the effects are restored properly. UPDATE: Yes it is. This is a standard account. The surprising thing is, the check boxes are checked but still no animations. Once I make the Apply button enabled, the animations show up again.

    Read the article

  • sendmail on Ubuntu won't send from www-data user

    - by bumperbox
    If I call the mail() function in PHP from the web server (running as www-data), I get an error sending email. If I call the same script from the command line logged in as root, it works. If I switch user to www-data and run it from the command line, I get this error message:

        WARNING: RunAsUser for MSP ignored, check group ids (egid=33, want=107)
        can not chdir(/var/spool/mqueue-client/): Permission denied
        Program mode requires special privileges, e.g., root or TrustedUser.
        FAILED

    (the same warning repeats for each message, ending with "Test Complete"). I am guessing I need to do something in the sendmail configuration. I have googled for some solutions but have ended up more confused. Can someone let me know what configuration I need to change so I can send from the www-data user?

    Read the article

  • mdadm email notification - change the default subject

    - by Shirker
    I have this entry in my crontab:

        00 */1 * * * /sbin/mdadm --monitor --scan -1 [email protected]

    It works more than perfectly, but I need to change the default email template: instead of the subject "mdadm monitoring" I'd like it to be "mdadm monitoring from «IP ADDRESS»" or something like that.

        [root@mail ~]# rpm -ql mdadm-3.2.5-4.el6_4.2.x86_64 | grep -v -E '(man|doc)'
        /etc/cron.d/raid-check
        /etc/rc.d/init.d/mdmonitor
        /etc/sysconfig/raid-check
        /lib/udev/rules.d/65-md-incremental.rules
        /sbin/mdadm
        /sbin/mdmon
        /usr/sbin/raid-check
        /var/run/mdadm

    Is it hardcoded, or is it possible to change?

    Read the article

  • Trouble logging into VPN on Win7 64-bit

    - by mike
    Hi, before I can successfully log on to my company's VPN, it runs a small check in the browser to inspect my current processes. One of the things it checks (among other things) is whether you have an antivirus running. If you don't, you can't connect, and it says to please install one, with a couple of links to some free ones. The issue I'm having is that I recently upgraded to Win7 x64 and I haven't been able to get past the "antivirus check" part. Before, I had AVG running and I never had a single problem for years. Now I have tried both AVG and Avast and I still get blocked. Does it have something to do with both of these antiviruses running in *32 mode in the process list? Any help or ideas on how to fix this would be awesome. Thanks!

    Read the article

  • InnoDB: cannot allocate memory for the buffer pool

    - by mingyeow
    My InnoDB keeps crashing. This is the error message below. Does anyone know why this keeps happening?

        InnoDB: by InnoDB 49201616 bytes. Operating system errno: 12
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        InnoDB: Note that in most 32-bit computers the process
        InnoDB: memory space is limited to 2 GB or 4 GB.
        InnoDB: We keep retrying the allocation for 60 seconds...
        0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
        InnoDB: Fatal error: cannot allocate the memory for the buffer pool
        [ERROR] Default storage engine (InnoDB) is not available

    Read the article

  • Set up mutt to read through a threaded message and then return to the email index

    - by dat5h
    I have begun using mutt (the terminal email client) recently and I have it set up very nicely and it's easy to use. My only gripe is that sometimes it is difficult to tell when I transition from one thread to the next. Is it possible to set up the bindings such that moving to the next undeleted message first checks whether the next undeleted message is in the same thread, and if that check is false, returns to the index or throws an error? For example, I currently have:

        bind pager <down> next-undeleted
        bind pager } next-subthread

    Is there a more advanced function call that could perform this check? I haven't been able to find much on how to perform such conditional checks while going through the manual. Any help would be appreciated.

    Read the article

  • haproxy not passing X-Forwarded-For on HTTP POST

    - by Mark L
    Hello, I've set up HAProxy with the "option forwardfor" directive so it'll pass on the user's IP to PHP via $_SERVER["HTTP_X_FORWARDED_FOR"]. If the page request isn't a POST, the header is populated fine, but if it is a POST, it isn't populated. Any ideas where I've gone wrong? Thanks everyone! My whole HAProxy conf file for reference:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 4096
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen webfarm :80
            mode http
            balance roundrobin
            option forwardfor
            server webA 192.168.240.4 weight 1 maxconn 2048 check
            server webB 192.168.240.3 weight 1 maxconn 2048 check

        listen smtp :25
            mode tcp
            option tcplog
            balance roundrobin
            server smtp 192.168.240.4:25 check

    Read the article

  • Lucid 10.04 LTS => Precise 12.04.1: upgrade doesn't work

    - by Rastom
    I googled and looked into all the known issues on the Ubuntu forums, but I can't figure out why a 10.04 LTS server won't detect the latest LTS, 12.04.1. I guess since 12.04 is a fresh dist, not much is reported for related issues. Here is what I did:

        apt-get update
        apt-get upgrade
        apt-get install update-manager-core

    It was already installed, so no update for this package. I checked /etc/update-manager/release-upgrades:

        [DEFAULT]
        # Default prompting behavior, valid options:
        #
        # never  - Never check for a new release.
        # normal - Check to see if a new release is available. If more than one new
        #          release is found, the release upgrader will attempt to upgrade to
        #          the release that immediately succeeds the currently-running
        #          release.
        # lts    - Check to see if a new LTS release is available. The upgrader
        #          will attempt to upgrade to the first LTS release available after
        #          the currently-running one. Note that this option should not be
        #          used if the currently-running release is not itself an LTS
        #          release, since in that case the upgrader won't be able to
        #          determine if a newer release is available.
        Prompt=lts

    I also checked my /etc/apt/sources.list before running apt-get:

        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb http://security.ubuntu.com/ubuntu lucid-security universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security universe
        deb http://security.ubuntu.com/ubuntu lucid-security multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-security multiverse
        # deb http://landscape.canonical.com/packages/hardy ./
        # deb-src http://landscape.canonical.com/packages/hardy ./

    Then, following the Ubuntu guide for the Precise upgrade, the command below should work:

        root@xxxxxxxxx:/etc/apt# do-release-upgrade -d
        Checking for a new ubuntu release
        No new release found

    So am I missing something? The server accesses the outside world through a proxy, but I granted this server direct access to rule out any Internet access problem or redirection... still no clue. Any help would be appreciated.

    Read the article

  • Texture quality issues with libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without any quality loss (of course, it's vector ^^). I tried to simulate my game board directly in Illustrator just before setting my assets up in libgdx to implement them in my game. I set all the objects at the right size, so that they fit perfectly on the XHDPI device I am running my tests on. As you can see it works great (for me at least ^^); the PNG quality is as good as expected! So I exported all my PNGs at this size, set my assets up in libgdx and built my game APK. And here is a screenshot of my game board (don't pay attention to the differences in placement and objects, but compare the objects present in both screenshots). As you can see, I have a loss of PNG quality in the game. It can be seen clearly on the hedgehog PNG, but also (though less obviously) on the mushroom (check the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible in my first screenshot. And I just can't figure out why this is happening. If you have any ideas, you are very welcome! Thanks. PS: You can check the two game boards more clearly at these two links (look at them at 100%, displayed at high resolution): Good quality link, from Illustrator; Poor quality link, from the game. Second phase of tests: We displayed an object (the hedgehog) on our main menu screen to see how it looks. The thing is that it looks like it is supposed to, which means high quality with no pixelation. The hedgehog PNG is coming from an atlas:

        layer.addActor(hedgehog);

    No loss of quality with this method. So we think the problem is coming from the method we are using to display it on our game board:

        blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3));

    The block gets its size from the vector we associate with it, but we have a loss of quality with this method.

    Read the article

  • SQL SERVER – Answer – Value of Identity Column after TRUNCATE command

    - by pinaldave
    Earlier I had a conversation with a reader that almost gave me a headache. I suggest all of you read it before continuing this blog post: SQL SERVER – Reseting Identity Values for All Tables. I believed that he faced this situation because he did not understand the difference between SQL SERVER – DELETE, TRUNCATE and RESEED Identity, so I wrote a follow-up blog post explaining the difference between them. I asked a small question in the second blog post and I received many interesting comments. Let us go over the question and its answer here one more time. Here is the scenario to set up the puzzle:

    1. Create a table with identity seed = 11
    2. Insert a value and check the identity value (it will be 11)
    3. Reseed it to 1
    4. Insert a value and check the identity value (it will be 2)
    5. TRUNCATE the table
    6. Insert a value and check the identity value (it will be 11)

    Let us see the T-SQL script for the same:

        USE [TempDB]
        GO
        -- Create Table
        CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(11,1) NOT NULL,
        [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO
        -- Reseed to 1
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO
        -- Truncate table
        TRUNCATE TABLE [TestTable]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO
        -- Question for you Here
        -- Clean up
        DROP TABLE [TestTable]
        GO

    Now let us see the output of the three SELECT statements: 1) the first SELECT after creating the table; 2) the second SELECT after reseeding the table; 3) the third SELECT after truncating the table. The reason is simple: if the table contains an identity column, the counter for that column is reset to the seed value defined for the column. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
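    As a side note, if you want to inspect the current identity value at any of these steps without changing it, the standard NORESEED option of DBCC CHECKIDENT reports it (shown here against the same sample table):

        -- Reports the current identity value and the column's maximum value,
        -- without reseeding anything.
        DBCC CHECKIDENT ('TestTable', NORESEED);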

    Read the article

  • Scan disk runs on every boot with Windows XP

    - by Sarfraz Ahmed
    I have four drives on my computer. The problem is that each time I start the computer the scan disk check (CHKDSK) runs for a drive even if I shut down my computer properly. I ran the thorough scan disk check but still for that drive, the scan disk check is always performed no matter what. I wonder what is wrong although everything is fine and accessible along with drive data. Could you guys please help me out of this? I am using Windows XP SP2. Thanks.

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation that I often see is that companies with large databases often have only one or two very large tables. They want to run DBCC CHECKDB on the database to check everything except those couple of tables, due to time constraints. I posted a request on the Connect site about this some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option. The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't much like this as a workaround, as clients often have a very large number of objects that they do want to check and only one or two that they don't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck
        @Databases = 'AdventureWorks',
        @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck
        @Databases = 'AdventureWorks',
        @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG',
        @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck
        @Databases = 'AdventureWorks',
        @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG',
        @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck
        @Databases = 'AdventureWorks',
        @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG',
        @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck
        @Databases = 'AdventureWorks',
        @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG',
        @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects, and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • Monitor network connectivity in WP7 apps

    - by Daniel Moth
    Most interesting Windows Phone apps rely on some network service for their functionality, so at some point in your app you may need to know programmatically whether a network connection is available or not. For example, the Translator by Moth app relies on the Bing translation service for translations. When a request for translation (text or voice) is made, the network call may fail. The failure reason is not evident from any of the return results, so I check the connection to see if it is present. Depending on that, a different message is shown to the user. Before the translation phase is even reached, at app start-up time the Bing service is queried for its list of languages; in that case I don't want to show the user a message, and instead want to be notified when the network is available in order to send the query out again. So for those two requirements (which I imagine are common in other apps) I wrote a simple static wrapper class, MyNetwork, over the framework APIs:

    1. Call the MyNetwork.MonitorNetworkAvailability() method once so it monitors network changes
    2. At any time, check the MyNetwork.IsConnected property to check for network presence
    3. Subscribe to its MyNetwork.ConnectionEstablished event
    4. Optionally, during debugging, use its MyNetwork.ChangeStatus method to simulate a change in network status

    As usual, there may be better ways to achieve this, but this class works perfectly for my scenarios. You can download the code here: MyNetworks.cs. Comments about this post are welcome at the original blog.
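    For a flavor of what such a wrapper can look like, here is a minimal sketch. It is not Moth's actual class (download MyNetworks.cs above for that); DeviceNetworkInformation and NetworkChange are the usual WP7 types for this job, but treat the exact wiring as an assumption:

        using System;
        using Microsoft.Phone.Net.NetworkInformation;

        public static class MyNetworkSketch
        {
            // raised (possibly more than once) whenever a connection is observed
            public static event EventHandler ConnectionEstablished;

            public static bool IsConnected
            {
                get { return DeviceNetworkInformation.IsNetworkAvailable; }
            }

            public static void MonitorNetworkAvailability()
            {
                System.Net.NetworkInformation.NetworkChange.NetworkAddressChanged += (s, e) =>
                {
                    var handler = ConnectionEstablished;
                    if (IsConnected && handler != null)
                    {
                        handler(null, EventArgs.Empty);
                    }
                };
            }
        }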

    Read the article

  • How can I perform 2D side-scroller collision checks in a tile-based map?

    - by bill
    I am trying to create a game where you have a player that can move horizontally and jump. It's kind of like Mario, but it isn't a side scroller. I'm using a 2D array to implement a tile map. My problem is that I don't understand how to check for collisions using this implementation. After spending about two weeks thinking about it, I've got two possible solutions, but both of them have some problems. Let's say that my map is defined by the following tiles:

        0 = sky
        1 = player
        2 = ground

    The data for the map itself might look like:

        00000
        10002
        22022

    For solution 1, I'd move the player (the 1) a complete tile and update the map directly. This makes collision easy because you can check if the player is touching the ground simply by looking at the tile directly below the player:

        // x and y are the tile coordinates of the player. The tile origin is the upper-left.
        if (grid[x][y+1] == 2){
            // The player is standing on top of a ground tile.
        }

    The problem with this approach is that the player moves in discrete tile steps, so the animation isn't smooth. For solution 2, I thought about moving the player via pixel coordinates and not updating the tile map. This will make the animation much smoother because I have a smaller movement unit per frame. However, this means I can't really accurately store the player in the tile map, because sometimes he would logically be between two tiles. But the bigger problem here is that I think the only way to check for collision is to use Java's intersection method, which means the player would need to be at least a single pixel "into" the ground to register a collision, and that won't look good. How can I solve this problem?
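    One way to reconcile the two solutions is to keep pixel movement but derive tile coordinates from the pixel position at check time: the grid stays the source of truth for terrain while the player moves smoothly. A minimal sketch (the tile size and the player's pixel height are illustrative assumptions, not values from the question):

        public class TileCollision {
            static final int TILE_SIZE = 32; // assumed square tiles
            static final int GROUND = 2;     // ground tile id from the map above

            // px, py: the player's top-left corner in pixels; heightPx: sprite height.
            // This mirrors the grid[x][y+1] check, computed from pixel coordinates.
            public static boolean onGround(int[][] grid, double px, double py, int heightPx) {
                int tileX = (int) (px / TILE_SIZE);
                int tileBelow = (int) ((py + heightPx) / TILE_SIZE); // row just under the feet
                if (tileX < 0 || tileX >= grid.length || tileBelow >= grid[0].length) {
                    return false; // off the map counts as no ground
                }
                return grid[tileX][tileBelow] == GROUND;
            }
        }

    On landing, snapping py to (tileBelow * TILE_SIZE - heightPx) places the feet exactly on the tile boundary, avoiding the one-pixel overlap that an intersection-based test would require.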

    Read the article

  • Checking if an object is inside bounds of an isometric chunk

    - by gopgop
    How would I check if an object is inside the bounds of an isometric chunk? For example, I have a player and I want to check if it's inside the bounds of this isometric chunk. I draw the isometric chunk's tiles using OpenGL quads. My first try was checking in a square pattern, kind of like this (e = object; this = isometric chunk):

        if (e.getLocation().getX() < this.getLocation().getX()+World.CHUNK_WIDTH*World.TILE_WIDTH && e.getLocation().getX() > this.getLocation().getX()) {
            if (e.getLocation().getY() > this.getLocation().getY() && e.getLocation().getY() < this.getLocation().getY()+World.CHUNK_HEIGHT*World.TILE_HEIGHT) {
                return true;
            }
        }
        return false;

    What happens here is that it checks in a SQUARE around the chunk, not the chunk's real isometric bounds. (The original post included images marking in red where the program checks the bounds, what I have now, and the desired check.) Ultimately I want to do the same for each tile in the chunk. Extra info: until now in my game you could only move tile by tile, but now I want entities to move freely while still having a tile location, so that no matter where they are on a tile, their tile location is that tile; when they enter a different tile's bounding box, their tile location becomes the new tile. The same goes for chunks. The player does have an area, but the area does not matter in this case: as long as the X and Y are inside the bounding box, it should return true. Entities don't have to be completely on the tile.
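    A point-in-diamond test can replace the square check: treat the chunk as a diamond centered on its bounding box and test the normalized L1 distance. Here is a sketch; the center/half-extent math below is an assumption about where the diamond sits inside the chunk's bounding box, so adjust it to how the quads are actually drawn:

        public class IsoBounds {
            // Returns true if the point (x, y) lies on or inside the diamond
            // centered at (cx, cy) with half-width hw and half-height hh.
            public static boolean insideDiamond(double x, double y,
                                                double cx, double cy,
                                                double hw, double hh) {
                double dx = Math.abs(x - cx) / hw;
                double dy = Math.abs(y - cy) / hh;
                return dx + dy <= 1.0;
            }
        }

    For the chunk above, cx would be chunkX + World.CHUNK_WIDTH*World.TILE_WIDTH/2.0 and cy would be chunkY + World.CHUNK_HEIGHT*World.TILE_HEIGHT/2.0, with hw and hh as those same extents halved; the identical test then works per tile using each tile's own center and half-extents.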

    Read the article
