Search Results

Search found 31278 results on 1252 pages for 'impossible object'.


  • GPL vs plugin interfaces not designed with a specific application in mind

    - by Kristóf Marussy
    I am not seeking or in need of legal advice, but an interesting thought experiment came to my mind. Imagine the following situation (I cannot really think of a concrete example and I am unsure if a real manifestation even exists): there is a free (libre) API A licensed under some permissive license or even the LGPL. Non-free application B implements this API in order to host plugins, but there is other free software doing the same thing. Moreover, there is plugin C acting as a plugin under API A. It links to library D, which is under the GPL, so C is also under the GPL. Plugins using A are loaded into hosts via a dlopen-like mechanism and use complex data structures for host-plugin communication. Neither B nor C distributes any files that may be required for A to function properly (like headers containing the structure definitions of A, or dynamic libraries containing helper functions for A written by the authors of A), but such things may exist. Now some user installs application B and plugin C on his machine, along with anything that may be required for API A to function properly. Then he proceeds to load C into B and creates some intellectual property with B which is not a piece of software. Did a GPL violation happen at some point, and if so, who violated the GPL and why? The authors of C, by making it possible for C to be used in the non-free host B? This is a possibility because they can't grant an exception to the GPL (like the ones described in http://www.gnu.org/licenses/gpl-faq.html#GPLPluginsInNF or http://www.gnu.org/licenses/gpl-faq.html#LinkingOverControlledInterface) due to D's license terms. The authors of B, by making it possible for C to be loaded into B? This is a possibility because http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins disallows the mechanisms A uses for communication between the free and non-free modules. The authors of A, because the API may be used (and in this case, was used) for communication between GPL'd and non-free software? This would be extremely absurd. The user, because at the moment of loading C into B, he made a derived work of C? I think this is impossible, because he does not distribute it. But would the situation change if he decided to release a configuration file for B which makes B load C as a plugin? Nobody, because A counts as a 'system library', and both B and C directly interact only with A, not with each other? In a sane world, this is what would happen... A concrete example of A could be some kind of audio (think LADSPA) or image processing API. However, I could find no such interface (that is free software, generic, and also implemented by commercial tools). A real-world example could also be quite enlightening.

    Read the article

  • Putting an ASP.NET application into OFFLINE mode

    - by Jason Ulloa
    One feature every application should have is the ability to go into OFFLINE mode to keep users out. This is essential whenever we want to make changes to the application (change something, deploy an update, etc.) or to the database, and want to avoid problems with users who are logged into the application at that moment. Many examples around the Web show how to accomplish this with one of two techniques: 1. The first is to use the App_Offline.htm file; however, this technique has a drawback: once the file has been uploaded to the application, the application is blocked completely and there is no way to bring it back ONLINE short of deleting the file. In other words, we cannot control it. 2. The second is to use the httpRuntime element, but again we have the same problem: once OFFLINE mode is enabled through this element, we can no longer reach an administration page to change it back. An example of the httpRuntime element: <configuration> <system.web> <httpRuntime enable="false" /> </system.web> </configuration> Given the above, the best approach is to be able to put the site into OFFLINE mode from an administration page, while keeping access to that administration page so we can later flip the value that puts the application back ONLINE. For this we will use the application's web.config and a small class in charge of reading and writing the values. The first step is to open web.config and define, inside appSettings, two new keys that hold the values for the application's OFFLINE mode: <appSettings> <add key="IsOffline" value="false" /> <add key="IsOfflineMessage" value="Sistema temporalmente no disponible por tareas de mantenimiento." /> </appSettings> In the keys above, IsOffline has a value of false; this tells the application that its current operating mode is ONLINE, and it is the value we will later change to true to switch to OFFLINE mode. The second key (IsOfflineMessage) holds the message ("Sistema temporalmente...", i.e. "System temporarily unavailable for maintenance") that will be shown to the user while the site is in OFFLINE mode. Once the two keys are defined in web.config, we write a custom class to read and write the values. Add a new class item to the project, name it SettingsRules, and declare it public.
    This class contains two methods. The first reads the values: public string readIsOnlineSettings(string sectionToRead) { Configuration cfg = WebConfigurationManager.OpenWebConfiguration(System.Web.Hosting.HostingEnvironment.ApplicationVirtualPath); KeyValueConfigurationElement isOnlineSettings = (KeyValueConfigurationElement)cfg.AppSettings.Settings[sectionToRead]; return isOnlineSettings.Value; } The second method writes the new values to web.config: public bool saveIsOnlineSettings(string sectionToWrite, string value) { bool succesFullySaved; try { Configuration cfg = WebConfigurationManager.OpenWebConfiguration(System.Web.Hosting.HostingEnvironment.ApplicationVirtualPath); KeyValueConfigurationElement repositorySettings = (KeyValueConfigurationElement)cfg.AppSettings.Settings[sectionToWrite]; if (repositorySettings != null) { repositorySettings.Value = value; cfg.Save(ConfigurationSaveMode.Modified); } succesFullySaved = true; } catch (Exception) { succesFullySaved = false; } return succesFullySaved; } Finally, we define in the class a region called instance, containing a property in charge of returning an instance of the class (so we don't have to create one later): #region instance private static SettingsRules m_instance; // Properties public static SettingsRules Instance { get { if (m_instance == null) { m_instance = new SettingsRules(); } return m_instance; } } #endregion instance With this, the main class is complete, so we move on to the pages and the rest of the code that rounds out the functionality. To complement the work on web.config we will use the marvellous GLOBAL.ASAX; it holds the code in charge of detecting whether the application is ONLINE or OFFLINE, and of blocking every page and directory except the one designated as the administration area, so the site can be reconfigured later. The Global.asax event we will use is Application_BeginRequest: protected void Application_BeginRequest(Object sender, EventArgs e) { if (Convert.ToBoolean(SettingsRules.Instance.readIsOnlineSettings("IsOffline"))) { string Virtual = Request.Path.Substring(0, Request.Path.LastIndexOf("/") + 1); if (Virtual.ToLower().IndexOf("/admin/") == -1) { // Not the admin section: block this request Server.Transfer("~/TemporarilyOfflineMessage.aspx"); } } } The first IF checks whether the web.config value is true or false; if it is true, we take the requested URL and check in a second IF whether it belongs to the admin section (this section is just a folder in the application called admin, and it can be changed to any other). If the result of that IndexOf is -1, there is no match, and that is the flag that lets us immediately block the current page, transferring the user to a maintenance page. Now, in the Admin folder we create a new ASP.NET page called OnlineSettings.aspx to read and update the web.config data, plus a Default.aspx page for testing. The OnlineSettings page has two important steps: 1. Read the current configuration data: protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { IsOffline.Checked = Convert.ToBoolean(mySettings.readIsOnlineSettings("IsOffline")); OfflineMessage.Text = mySettings.readIsOnlineSettings("IsOfflineMessage"); } } 2. Update the data with the new values.
    protected void UpdateButton_Click(object sender, EventArgs e) { string htmlMessage = OfflineMessage.Text.Replace(Environment.NewLine, "<br />"); // Update the Application variables Application.Lock(); if (IsOffline.Checked) { mySettings.saveIsOnlineSettings("IsOffline", "True"); mySettings.saveIsOnlineSettings("IsOfflineMessage", htmlMessage); } else { mySettings.saveIsOnlineSettings("IsOffline", "false"); mySettings.saveIsOnlineSettings("IsOfflineMessage", htmlMessage); } Application.UnLock(); } Finally, in the application root we create a new aspx page called TemporarilyOfflineMessage.aspx, which is the one shown when the application is blocked. In the end the application looks something like this: [screenshots in the original post: the blocked page, the blocking configuration, and the finished sample application]

    Read the article

  • Creating a two-way Forest trust with Powershell

    - by Michel Klomp
    Here is a small PowerShell script for creating a two-way forest trust.
    $localForest = [System.DirectoryServices.ActiveDirectory.Forest]::getCurrentForest()
    $strRemoteForest = 'domain.local'
    $strRemoteUser = 'administrator'
    $strRemotePassword = 'P@ssw0rd'
    $remoteContext = New-Object System.DirectoryServices.ActiveDirectory.DirectoryContext('Forest', $strRemoteForest, $strRemoteUser, $strRemotePassword)
    $remoteForest = [System.DirectoryServices.ActiveDirectory.Forest]::getForest($remoteContext)
    $localForest.CreateTrustRelationship($remoteForest, 'Bidirectional')
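    For comparison, the same .NET calls the script wraps can be used directly from C# (assuming a reference to System.DirectoryServices); a minimal sketch, reusing the same placeholder forest name and credentials:

    using System.DirectoryServices.ActiveDirectory;

    class CreateForestTrust
    {
        static void Main()
        {
            // The forest this code runs in
            Forest localForest = Forest.GetCurrentForest();

            // Context for the remote forest; name and credentials are placeholders
            DirectoryContext remoteContext = new DirectoryContext(
                DirectoryContextType.Forest, "domain.local", "administrator", "P@ssw0rd");
            Forest remoteForest = Forest.GetForest(remoteContext);

            // Create the two-way (bidirectional) forest trust
            localForest.CreateTrustRelationship(remoteForest, TrustDirection.Bidirectional);
        }
    }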

    Read the article

  • SQL Saturday #162 Cambridge

    - by Most Valuable Yak (Rob Volk)
    Despite the efforts of American Airlines, this past weekend I attended the first SQL Saturday in the UK!  Hosted by the SQLCambs Chapter of PASS and organized by Mark (b|t) & Lorraine Broadbent, ably assisted by John Martin (b|t), Mark Pryce-Maher (b|t) and other folks whose names I've unfortunately forgotten, it was held at the Crowne Plaza Hotel, which is completely surrounded by Cambridge University. On Friday, they presented 3 pre-conference sessions given by the brilliant American Cloud & DBA Guru, Buck Woody (b|t), the brilliant Danish SQL Server Internals Guru, Mark Rasmussen (b|t), and the brilliant Scottish Business Intelligence Guru and recent Outstanding PASS Volunteer, Jen Stirrup (b|t).  While I would have loved to attend any of their pre-cons (having seen them present several times already), finances and American Airlines ultimately made that impossible.  But not to worry, I caught up with them during the regular sessions and at the speaker dinner.  And I got back the money they all owed me.  (Actually I owed Mark some money) The schedule was jam-packed: even with only 4 tracks, there were 8 regular slots, a lunch session for sponsor presentations, and a 15-minute keynote given by Buck Woody, who besides giving an excellent history of SQL Server at Microsoft (and before), also explained the source of the "unknown contact" image that appears in Outlook.  Hint: it's not Buck himself. Amazingly, and against all better judgment, I even got to present at SQL Saturday 162!  I did a 5-minute Lightning Talk on Regular Expressions in SSMS.  I then did a regular 50-minute session on Constraints.  You can download the content for the regular session at that link, and for the regular expression presentation here. I had a great time and had a great audience for both of my sessions.  You would never have guessed this was the first event for the organizers; everything went very smoothly, especially for the number of attendees and the relative smallness of the space.  The event sponsors also deserve a lot of credit for making themselves fit in a small area and for staying through the entire event until the giveaways at the very end. Overall this was one of the best SQL Saturdays I've ever attended and I have to congratulate Mark B, Lorraine, John, Mark P-M, and all the volunteers and speakers for making this an astoundingly hard act to follow!  Well done!

    Read the article

  • const vs. readonly for a singleton

    - by GlenH7
    First off, I understand there are folk who oppose the use of singletons. I think it's an appropriate use in this case as it's constant state information, but I'm open to differing opinions / solutions. (See The singleton pattern and When should the singleton pattern not be used?) Second, for a broader audience: C++/CLI has a keyword similar to readonly with initonly, so this isn't strictly a C#-only question. (Literal field versus constant variable in C++/CLI) Sidenote: A discussion of some of the nuances of using const or readonly. My Question: I have a singleton that anchors together some different data structures. Part of what I expose through that singleton are some lists and other objects, which represent the necessary keys or columns in order to connect the linked data structures. I doubt that anyone would try to change these objects through a different module, but I want to explicitly protect them from that risk. So I'm currently using a "readonly" modifier on those objects*. I'm using readonly instead of const with the lists as I read that using const will embed those items in the referencing assemblies and will therefore trigger a rebuild of those referencing assemblies if / when the list(s) is/are modified. This seems like tighter coupling than I would want between the modules, but I wonder if I'm obsessing over a moot point. (This is question #2 below) The alternative I see to using "readonly" is to make the variables private and then wrap them with a public get. I'm struggling to see the advantage of this approach as it seems like wrapper code that doesn't provide much additional benefit. (This is question #1 below) It's highly unlikely that we'll change the contents or format of the lists; they're a compilation of things to avoid using magic strings all over the place. Unfortunately, not all the code has been converted over to using this singleton's presentation of those strings. Likewise, I don't know that we'd change the containers / classes for the lists. So while I normally argue for the encapsulation advantages a get wrapper provides, I'm just not feeling it in this case. A representative sample of my singleton: public sealed class mySingl { private static volatile mySingl sngl; private static object lockObject = new Object(); public readonly Dictionary<string, string> myDict = new Dictionary<string, string>() { {"I", "index"}, {"D", "display"}, }; public enum parms { ABC = 10, DEF = 20, FGH = 30 }; public readonly List<parms> specParms = new List<parms>() { parms.ABC, parms.FGH }; public static mySingl Instance { get { if (sngl == null) { lock (lockObject) { if (sngl == null) sngl = new mySingl(); } } return sngl; } } private mySingl() { doSomething(); } } Questions: Am I taking the most reasonable approach in this case? Should I be worrying about const vs. readonly? Is there a better way of providing this information?
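    One nuance worth calling out: readonly on a List<T> or Dictionary<K,V> field only prevents the reference from being reassigned; callers can still Add, Remove, or overwrite entries. If the intent is to protect the contents as well, a read-only wrapper behind a get does that. A minimal sketch reusing the question's names (singleton plumbing omitted; this is not the poster's actual code):

    using System.Collections.Generic;
    using System.Collections.ObjectModel;

    public sealed class mySingl
    {
        public enum parms { ABC = 10, DEF = 20, FGH = 30 };

        // Private backing list: mutable only inside the singleton
        private readonly List<parms> specParms = new List<parms>() { parms.ABC, parms.FGH };

        // Cached wrapper so AsReadOnly() isn't re-allocated on every access
        private readonly ReadOnlyCollection<parms> specParmsView;

        private mySingl()
        {
            specParmsView = specParms.AsReadOnly();
        }

        // Callers can enumerate and index, but Add/Remove are unavailable
        public ReadOnlyCollection<parms> SpecParms
        {
            get { return specParmsView; }
        }
    }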

    Read the article

  • Unity: Spin wheels to move vehicle

    - by Paul Manta
    I am just getting started with Unity and I'd like to ask a question. If I have a "Vehicle" object that has two children, "FrontWheel" and "BackWheel" (both wheels are cylinders), how should I set everything up so that I can move the entire vehicle by turning its wheels? When I apply a torque to "FrontWheel", the vehicle starts to move, but instead of the whole thing moving together, the chassis rolls on the cylinders and eventually falls off. How can I prevent it from doing that?
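    One common setup, sketched below under the assumption that the chassis and each wheel have their own Rigidbody, is to attach each wheel to the chassis with a HingeJoint and drive the joint's motor instead of applying raw torque; the joint constrains the bodies so they move together:

    using UnityEngine;

    // Attach to each wheel cylinder. Joints the wheel to the chassis and
    // spins it via the hinge motor, so chassis and wheels move as one vehicle.
    public class WheelDrive : MonoBehaviour
    {
        public Rigidbody chassis;           // assign the Vehicle's Rigidbody in the Inspector
        public float motorForce = 50f;      // illustrative values, tune to taste
        public float targetVelocity = 500f;

        void Start()
        {
            HingeJoint hinge = gameObject.AddComponent<HingeJoint>();
            hinge.connectedBody = chassis;  // wheel is now physically attached to the chassis
            hinge.axis = Vector3.right;     // spin around the wheel's local X axis

            JointMotor motor = hinge.motor;
            motor.force = motorForce;
            motor.targetVelocity = targetVelocity;
            hinge.motor = motor;            // JointMotor is a struct: write it back
            hinge.useMotor = true;
        }
    }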

    Read the article

  • 2D game editor with SDK or open format (Windows)

    - by Edward83
    I need a 2D map editor (Windows) for an RPG-like game. The features most important to me: Load tiles as classes with attributes, for example "the tile at coordinates [25,30] is an object of class FlyingMonster with speed=1.0f"; Export the map to my own format (SDK) or to an open format which I can convert to my own. A nice extra feature would be a multi-tile brush: I want to select one or many tiles as a single brush and paint them across the canvas.
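    For illustration, the kind of per-tile record being asked for could be serialized from something as small as this hypothetical C# class (names and attributes are made up for the example, not any particular editor's format):

    using System.Xml.Serialization;

    // One placed tile in the exported map: position plus gameplay class and attributes
    public class TileInstance
    {
        [XmlAttribute] public int X;              // e.g. 25
        [XmlAttribute] public int Y;              // e.g. 30
        [XmlAttribute] public string TileClass;   // e.g. "FlyingMonster"
        [XmlAttribute] public float Speed;        // e.g. 1.0f
    }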

    Read the article

  • Titan – The Mystery of the Missing Waves

    - by Akemi Iwaya
    The moon Titan holds a unique distinction… it is the only other known 'object' in our solar system with a dense atmosphere and stable bodies of surface liquid. Unlike Earth though, there are no waves on Titan. This mystery has planetary scientists baffled and on a quest to understand this incredible phenomenon. Titan: The Mystery of the Missing Waves [YouTube]

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me. Background While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I'd been working on a data access layer at work to call a market data web service.  This web service would take a list of symbols to quote and would return back the quote data.  To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data.  Not quite long enough that customers will typically notice a quote go "stale", but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time. A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information.  The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage. First Impressions The main thing I've always loved about the ANTS tools is their ease of use.  Pretty much everything is right there in front of you in a way that makes it easy for you to find what you need with little digging required.  I've worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP.  The opening dialog is very straightforward.  You can choose from here whether to debug an executable, a web application (either in IIS or from VS's web development server), windows services, etc. So I chose a .NET Executable and navigated to the build location of my test harness.  Then I began profiling. At this point, while the application is running, you can see a chart of the memory as it ebbs and flows with allocations and collections.  At any given point in time, you can take snapshots (to compare states), zoom in, or choose to stop at any time.  Snapshots Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation so you get an idea of how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes.  Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer.  I once made the mistake of caching data for 30 minutes and found it didn't get collected very quickly after I released my reference because it had been promoted to Gen 2 – doh! Analysis It looks like (from the second pie chart) that the majority of the allocations were in the string class.  
    This is also expected for me because the majority of the memory allocated is in the web service responses, so it seems the entities I'm adapting to (to avoid being too tightly coupled to the web service proxy classes, which can change easily out from under me) aren't taking a significant portion of memory. I also appreciate that they have clear summary text in key places, such as "No issues with large object heap fragmentation were detected".  For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom for "What to look for on the summary" which loads a web page of help on key points to look for. Clicking over to the session overview, it's easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same.  Looking at my snapshots, I'm pretty happy with the fact that memory allocation and heap size seem to be fairly stable and under control: Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends. Back on the analysis tab, we can go to the [Class List] button to get an idea of what classes are making up the majority of our memory usage.  As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second.  I was curious about this, so I selected it and went into the [Instance Categorizer].  This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph, and discovered that they were being held by the System.ServiceModel.ChannelFactoryRefCache, and I was satisfied this was just an artifact of my WCF proxy. I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find. For example, if I suspected a memory leak, I might filter for survivors in growing classes: for classes whose instance counts are growing (more are being created than cleaned up), this shows which instances survive garbage collection.  This might allow me to drill down and find places where I'm holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the "Instance Retention Graph", which creates a graph showing what references are being held in memory and who is holding onto them. Visual Studio Integration Of course, VS has its own profiler built in, and for a free bundled profiler it is quite capable, but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you'll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio.   Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in.  It doesn't integrate with the Visual Studio windows themselves (like the VS profiler does), but still the plethora of information it provides and the clear and concise manner in which it presents it makes it well worth it. Summary If you like the ANTS series of tools, you shouldn't be disappointed with the ANTS Memory Profiler.  
    It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I've used other profilers before that came with 3-inch-thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all.  It's built for your everyday developer to get in and find their problems quickly, and I like that! Technorati Tags: Influencers,ANTS,Memory,Profiler
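    As a footnote, the 5-second quote cache described in the Background section needs very little code; here is a minimal sketch (the Quote type, the names, and the fetch delegate are illustrative stand-ins, not the production code):

    using System;
    using System.Collections.Concurrent;

    // Illustrative 5-second cache keyed by symbol: repeated requests for the
    // same symbol within the window collapse into one web service call.
    public class QuoteCache
    {
        private static readonly TimeSpan Ttl = TimeSpan.FromSeconds(5);
        private readonly ConcurrentDictionary<string, Tuple<DateTime, Quote>> cache =
            new ConcurrentDictionary<string, Tuple<DateTime, Quote>>();
        private readonly Func<string, Quote> fetchQuote; // wraps the actual web service call

        public QuoteCache(Func<string, Quote> fetchQuote)
        {
            this.fetchQuote = fetchQuote;
        }

        public Quote Get(string symbol)
        {
            Tuple<DateTime, Quote> entry;
            if (cache.TryGetValue(symbol, out entry) && DateTime.UtcNow - entry.Item1 < Ttl)
                return entry.Item2; // still fresh: no service call made

            Quote quote = fetchQuote(symbol); // stale or missing: fetch and re-cache
            cache[symbol] = Tuple.Create(DateTime.UtcNow, quote);
            return quote;
        }
    }

    public class Quote { /* symbol, price, timestamp, ... */ }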

    Read the article

  • PostSharp, Obfuscation, and IL

    - by Simon Cooper
    Aspect-oriented programming (AOP) is a relatively new programming paradigm. Originating at Xerox PARC in 1994, the paradigm was first made available for general-purpose development as an extension to Java in 2001. From there, it has quickly been adapted for use in all the common languages used today. In the .NET world, one of the primary AOP toolkits is PostSharp. Attributes and AOP Normally, attributes in .NET are entirely a metadata construct. Apart from a few special attributes in the .NET framework, they have no effect whatsoever on how a class or method executes within the CLR. Only by using reflection at runtime can you access any attributes declared on a type or type member. PostSharp changes this. By declaring a custom attribute that derives from PostSharp.Aspects.Aspect, applying it to types and type members, and running the resulting assembly through the PostSharp postprocessor, you can essentially declare 'clever' attributes that change the behaviour of whatever the aspect has been applied to at runtime. A simple example of this is logging. By declaring a TraceAttribute that derives from OnMethodBoundaryAspect, you can automatically log when a method has been executed: public class TraceAttribute : PostSharp.Aspects.OnMethodBoundaryAspect { public override void OnEntry(MethodExecutionArgs args) { MethodBase method = args.Method; System.Diagnostics.Trace.WriteLine( String.Format( "Entering {0}.{1}.", method.DeclaringType.FullName, method.Name)); } public override void OnExit(MethodExecutionArgs args) { MethodBase method = args.Method; System.Diagnostics.Trace.WriteLine( String.Format( "Leaving {0}.{1}.", method.DeclaringType.FullName, method.Name)); } } [Trace] public void MethodToLog() { ... } Now, whenever MethodToLog is executed, the aspect will automatically log entry and exit, without having to add the logging code to MethodToLog itself. PostSharp Performance Now this does introduce a performance overhead - as you can see, the aspect allows access to the MethodBase of the method the aspect has been applied to. If you were limited to C#, you would be forced to retrieve each MethodBase instance using Type.GetMethod(), matching on the method name and signature. This is slow. Fortunately, PostSharp is not limited to C#. It can use any instruction available in IL. And in IL, you can do some very neat things. Ldtoken C# allows you to get the Type object corresponding to a specific type name using the typeof operator: Type t = typeof(Random); The C# compiler compiles this operator to the following IL: ldtoken [mscorlib]System.Random call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle( valuetype [mscorlib]System.RuntimeTypeHandle) The ldtoken instruction obtains a special handle to a type called a RuntimeTypeHandle, and from that, the Type object can be obtained using GetTypeFromHandle. These are both relatively fast operations - no string lookup is required, only direct assembly and CLR constructs are used. 
However, a little-known feature is that ldtoken is not just limited to types; it can also get information on methods and fields, encapsulated in a RuntimeMethodHandle or RuntimeFieldHandle: // get a MethodBase for String.EndsWith(string) ldtoken method instance bool [mscorlib]System.String::EndsWith(string) call class [mscorlib]System.Reflection.MethodBase [mscorlib]System.Reflection.MethodBase::GetMethodFromHandle( valuetype [mscorlib]System.RuntimeMethodHandle) // get a FieldInfo for the String.Empty field ldtoken field string [mscorlib]System.String::Empty call class [mscorlib]System.Reflection.FieldInfo [mscorlib]System.Reflection.FieldInfo::GetFieldFromHandle( valuetype [mscorlib]System.RuntimeFieldHandle) These usages of ldtoken aren't usable from C# or VB, and aren't likely to be added anytime soon (Eric Lippert's done a blog post on the possibility of adding infoof, methodof or fieldof operators to C#). However, PostSharp deals directly with IL, and so can use ldtoken to get MethodBase objects quickly and cheaply, without having to resort to string lookups. The kicker However, there are problems. Because ldtoken for methods or fields isn't accessible from C# or VB, it hasn't been as well-tested as ldtoken for types. This has resulted in various obscure bugs in most versions of the CLR when dealing with ldtoken and methods, and specifically, generic methods and methods of generic types. This means that PostSharp was behaving incorrectly, or just plain crashing, when aspects were applied to methods that were generic in some way. So, PostSharp has to work around this. Without using the metadata tokens directly, the only way to get the MethodBase of generic methods is to use reflection: Type.GetMethod(), passing in the method name as a string along with information on the signature. Now, this works fine. It's slower than using ldtoken directly, but it works, and this only has to be done for generic methods. Unfortunately, this poses problems when the assembly is obfuscated. PostSharp and Obfuscation When using ldtoken, obfuscators don't affect how PostSharp operates. Because the ldtoken instruction directly references the type, method or field within the assembly, it is unaffected if the name of the object is changed by an obfuscator. However, the indirect loading used for generic methods was breaking, because that uses the name of the method when the assembly is put through the PostSharp postprocessor to lookup the MethodBase at runtime. If the name then changes, PostSharp can't find it anymore, and the assembly breaks. So, PostSharp needs to know about any changes an obfuscator does to an assembly. The way PostSharp does this is by adding another layer of indirection. When PostSharp obfuscation support is enabled, it includes an extra 'name table' resource in the assembly, consisting of a series of method & type names. When PostSharp needs to lookup a method using reflection, instead of encoding the method name directly, it looks up the method name at a fixed offset inside that name table: MethodBase genericMethod = typeof(ContainingClass).GetMethod(GetNameAtIndex(22)); PostSharp.NameTable resource: ... 20: get_Prop1 21: set_Prop1 22: DoFoo 23: GetWibble When the assembly is later processed by an obfuscator, the obfuscator can replace all the method and type names within the name table with their new name. 
That way, the reflection lookups performed by PostSharp will now use the new names, and everything will work as expected: MethodBase genericMethod = typeof(#kGy).GetMethod(GetNameAtIndex(22)); PostSharp.NameTable resource: ... 20: #kkA 21: #zAb 22: #EF5a 23: #2tg As you can see, this requires direct support by an obfuscator in order to perform these rewrites. Dotfuscator supports it, and now, starting with SmartAssembly 6.6.4, SmartAssembly does too. So, a relatively simple solution to a tricky problem, with some CLR bugs thrown in for good measure. You don't see those every day!

    Read the article

  • Handling permissions in a MVP application

    - by Chathuranga
    In a Windows Forms payroll application employing the MVP pattern (for a small-scale client) I'm planning user permission handling as follows (permission-based), since its implementation should be uncomplicated and straightforward. NOTE: The system could be used by a few users simultaneously (maximum 3) and the database is on the server side. This is my UserModel. Each user has a list of permissions granted to them. class User { string UserID { get; set; } string Name { get; set; } string NIC { get; set; } string Designation { get; set; } string PassWord { get; set; } List<string> PermissionList = new List<string>(); bool status { get; set; } DateTime EnteredDate { get; set; } } When a user logs in to the system it will keep the current user in memory. For example, in the BankAccountDetailEntering view I control the control permissions as follows. public partial class BankAccountDetailEntering : Form { bool AccountEditable { get; set; } private void BankAccountDetailEntering_Load(object sender, EventArgs e) { cmdEditAccount.Enabled = false; OnLoadForm(sender, e); // Event fires... if (AccountEditable) { cmdEditAccount.Enabled = true; } } } For this purpose all my relevant presenters (like BankAccountDetailPresenter) should be aware of the UserModel as well, in addition to the corresponding business model they present to the view. class BankAccountDetailPresenter { BankAccountDetailEntering _View; BankAccount _Model; User _UserModel; DataService _DataService; BankAccountDetailPresenter(BankAccountDetailEntering view, BankAccount model, User userModel, DataService dataService) { _View = view; _Model = model; _UserModel = userModel; _DataService = dataService; WireUpEvents(); } private void WireUpEvents() { _View.OnLoadForm += new EventHandler(_View_OnLoadForm); } private void _View_OnLoadForm(Object sender, EventArgs e) { foreach (string s in _UserModel.PermissionList) { if (s == "CanEditAccount") { _View.AccountEditable = true; return; } } } public void Show() { _View.ShowDialog(); } } So I'm handling the user permissions in the presenter by iterating through the list. Should this be performed in the Presenter or the View? Are there any more promising ways to do this? Thanks.
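    As an aside, the permission check in the presenter doesn't need a manual loop; a set-based lookup reads more clearly and is an O(1) test. A small sketch reusing the question's names:

    // In the User model, a HashSet makes membership tests cheap
    public HashSet<string> PermissionList = new HashSet<string>();

    // In the presenter's load handler
    private void _View_OnLoadForm(object sender, EventArgs e)
    {
        _View.AccountEditable = _UserModel.PermissionList.Contains("CanEditAccount");
    }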

    Read the article

  • Returning null vs Throwing exceptions

    - by Svish
    I'm in a bit of disagreement with a more experienced developer on this issue, and was wondering what you guys here think about it. The environment is Java, EJB 3, services, etc. The code I wrote calls a service to get things and to create things. The problem was that I got null pointer exceptions in places that didn't make sense. For example, when I asked the service to create an object, I got null back. And when I tried to look up an object with an id I knew existed, I still got null back. It was like it was ignoring me. I spent some time trying to figure out what was wrong in my code (since I'm less experienced I usually assume I have messed up). Turns out the reason was security. If the user principal using my service didn't have the right permissions to use the service I called from my service, then that service simply returned null. The services that are here already are usually not documented either, so this is just something you have to know... somehow... So here is the thing: I find this rather confusing as a developer interacting with this service. To me it would make much more sense if that service threw an exception which would tell me that hey, you don't have the proper permissions to get info about this thing or to create this new thing. I would then immediately know why my service wasn't working as expected. However, he argued that asking is not wrong. Exceptions should only be thrown when there is an error, and asking for a thing is not an error. Even if you don't have permission to "see" the thing you asked for. The things are often looked up in a GUI by users, and for those users not having the right permissions, these things simply "do not exist". So, in short: Asking is not wrong, hence no exception. Get methods return null because to those users those things "don't exist". Create methods return null because nothing was created, since the user wasn't allowed to create anything. So, what do you guys think? Is this normal and/or good practice? I prefer exceptions because I find it much easier to know what's going on. I would, for example, also prefer to throw a NotFoundException if you asked for an id which didn't exist, rather than returning null. Anyway, just curious what others think about this as I'm not the most experienced developer yet.
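    For concreteness, the contrast under discussion looks roughly like this (the question's environment is Java/EJB, but the shape is the same in C#; the service, permission check, and lookup below are hypothetical stand-ins):

    using System;
    using System.Collections.Generic;

    public class Thing { }

    public class ThingService
    {
        private readonly Func<string, bool> hasPermission; // stands in for the security check
        private readonly Func<long, Thing> lookup;         // stands in for the data access

        public ThingService(Func<string, bool> hasPermission, Func<long, Thing> lookup)
        {
            this.hasPermission = hasPermission;
            this.lookup = lookup;
        }

        // Style A: swallow the security failure (the behaviour being questioned);
        // the caller cannot tell "missing" apart from "forbidden".
        public Thing FindOrNull(long id)
        {
            if (!hasPermission("ViewThings")) return null;
            return lookup(id);
        }

        // Style B: make both failure modes explicit to the caller.
        public Thing Find(long id)
        {
            if (!hasPermission("ViewThings"))
                throw new UnauthorizedAccessException("ViewThings permission required");
            Thing t = lookup(id);
            if (t == null)
                throw new KeyNotFoundException("No Thing with id " + id);
            return t;
        }
    }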

    Read the article

  • ASP.NET Hosting :: ASP.NET File Upload Control

    - by mbridge
    The ASP.NET FileUpload control allows a user to browse and upload files to the web server. From a developer's perspective, it is as simple as dragging and dropping the FileUpload control onto the aspx page. An extra control, like a Button, is needed to actually save the file. <asp:FileUpload ID="FileUpload1" runat="server"/> <asp:Button ID="B1" runat="server" Text="Save" OnClick="B1_Click"/> By default, the FileUpload control allows a maximum file size of 4MB to be uploaded, and the execution timeout is 110 seconds. These properties can be changed from within the web.config file's httpRuntime section. The maxRequestLength property determines the maximum file size that can be uploaded. The executionTimeout property determines the maximum time for execution. <httpRuntime maxRequestLength="8192" executionTimeout="220"/> From code-behind, the MIME type, size of the file, file name and the extension of the file can be obtained. The maximum file size that can be uploaded can be obtained and modified using the System.Web.Configuration.HttpRuntimeSection class. Files can alternatively be saved using the System.Web.HttpFileCollection class. This collection can be populated using the Request.Files property; it contains HttpPostedFile objects, each of which represents one uploaded file. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.IO; using System.Configuration; using System.Web.Configuration; namespace WebApplication1 { public partial class WebControls : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } // Using the FileUpload control to upload and save files protected void B1_Click(object sender, EventArgs e) { if (FileUpload1.HasFile && FileUpload1.PostedFile.ContentLength > 0) { // MIME type of the uploaded file string mimeType = FileUpload1.PostedFile.ContentType; // size of the uploaded file int size = FileUpload1.PostedFile.ContentLength; // bytes // extension of the uploaded file string extension = System.IO.Path.GetExtension(FileUpload1.FileName); // save file string path = Server.MapPath("path"); FileUpload1.SaveAs(path + FileUpload1.FileName); } // maximum file size allowed HttpRuntimeSection rt = new HttpRuntimeSection(); rt.MaxRequestLength = rt.MaxRequestLength * 2; int length = rt.MaxRequestLength; // execution timeout TimeSpan ts = rt.ExecutionTimeout; double seconds = ts.TotalSeconds; } // Using Request.Files to save files private void AltSaveFile() { HttpFileCollection coll = Request.Files; for (int i = 0; i < coll.Count; i++) { HttpPostedFile file = coll[i]; if (file.ContentLength > 0) { /* do something */ } } } } }

    Read the article

  • Class Design -- Multiple Calls from One Method or One Call from Multiple Methods?

    - by Andrew
    I've been working on some code recently that interfaces with a CMS we use, and it's presented me with a question on class design that I think is applicable in a number of situations. Essentially, what I am doing is extracting information from the CMS and transforming it into objects that I can use programmatically for other purposes. This consists of two steps: Retrieve the data from the CMS (we have a DAL that I use, so this is essentially just specifying what data from the CMS I want; no connection logic or anything like that) Map the parsed data to my own [C#] objects There are basically two ways I can approach this: One call from multiple methods public void MainMethodWhereIDoStuff() { IEnumerable<MyObject> myObjects = GetMyObjects(); // Do other stuff with myObjects } private static IEnumerable<MyObject> GetMyObjects() { IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems(); List<MyObject> mappedObjects = new List<MyObject>(); // do stuff to map the CmsDataItems to MyObjects return mappedObjects; } private static IEnumerable<CmsDataItem> GetCmsDataItems() { List<CmsDataItem> cmsDataItems = new List<CmsDataItem>(); // do stuff to get the CmsDataItems I want return cmsDataItems; } Multiple calls from one method public void MainMethodWhereIDoStuff() { IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems(); IEnumerable<MyObject> myObjects = GetMyObjects(cmsDataItems); // do stuff with myObjects } private static IEnumerable<MyObject> GetMyObjects(IEnumerable<CmsDataItem> itemsToMap) { // ... } private static IEnumerable<CmsDataItem> GetCmsDataItems() { // ... } I am tempted to say that the latter is better than the former, as GetMyObjects does not depend on GetCmsDataItems, and the steps executed to retrieve the objects are explicit in the calling method (I'm concerned that the first approach is kind of an object-oriented version of spaghetti code). On the other hand, the two helper methods are never going to be used outside of the class, so I'm not sure it really matters whether one depends on the other. Furthermore, I like the fact that in the first approach the objects can be retrieved in one line; most likely anyone working with the main method doesn't care how the objects are retrieved, they just need to retrieve the objects, and the "daisy-chained" helper methods hide the exact steps needed to retrieve them (in practice, I actually have a few more methods but am still able to retrieve the object collection I want in one line). Is one of these methods right and the other wrong? Or is it simply a matter of preference or context-dependent?

    Read the article

  • Ensure your view and function metadata is up to date.

    - by simonsabin
    If you use views and functions, you will see that SQL Server holds their rowset metadata in system tables. This means that if you change the underlying tables, columns and data types, your views and functions can be out of sync. This is especially the case with views and functions that use select *. To get the metadata updated you need to use sp_refreshsqlmodule. This forces the object to be "re-run" into the database and the metadata updated. Thomas mentioned sp_refreshview which is a...(read more)
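    For example, refreshing a view's cached metadata after altering one of its base tables is a one-line call to the system procedure; a small C# sketch (connection string and view name are placeholders):

    using System.Data.SqlClient;

    class RefreshModuleMetadata
    {
        static void Main()
        {
            // Connection string and view name are placeholders
            using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
            using (var cmd = new SqlCommand("EXEC sys.sp_refreshsqlmodule @name = N'dbo.MyView';", conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery(); // re-runs the module definition and refreshes its metadata
            }
        }
    }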

    Read the article

  • Internal Mutation of Persistent Data Structures

    - by Greg Ros
    To clarify, when I use the terms persistent and immutable for a data structure, I mean that: The state of the data structure remains unchanged for its lifetime. It always holds the same data, and the same operations always produce the same results. The data structure allows Add, Remove, and similar methods that return new objects of its kind, modified as instructed, that may or may not share some of the data of the original object. However, while a data structure may seem persistent to the user, it may do other things under the hood. To be sure, all data structures are, internally, at least somewhere, based on mutable storage. If I were to base a persistent vector on an array, and copy it whenever Add is invoked, it would still be persistent, as long as I modify only locally created arrays. However, sometimes you can greatly increase performance by mutating a data structure under the hood, in more, say, insidious, dangerous, and destructive ways. Ways that might leave the abstraction untouched, not letting the user know anything has changed about the data structure, but that are critical at the implementation level. For example, let's say that we have a class called ArrayVector implemented using an array. Whenever you invoke Add, you get an ArrayVector built on top of a newly allocated array that has an additional item. A sequence of such updates will involve n array copies and allocations. (An illustration of the repeated copying appeared here in the original post.) However, let's say we implement a lazy mechanism that stores all sorts of updates, such as Add, Set, and others, in a queue. In this case, each update requires constant time (adding an item to a queue), and no array allocation is involved. When a user tries to get an item from the array, all the queued modifications are applied under the hood, requiring a single array allocation and copy (since we know exactly what data the final array will hold, and how big it will be). Future get operations will be performed on an empty cache, so they will take a single operation. But in order to implement this, we need to 'switch' or mutate the internal array to the new one, and empty the cache, which is a very dangerous action. However, considering that in many circumstances (most updates are going to occur in sequence, after all) this can save a lot of time and memory, it might be worth it; you will need to ensure exclusive access to the internal state, of course. This isn't a question about the efficacy of such a data structure. It's a more general question. Is it ever acceptable to mutate the internal state of a supposedly persistent or immutable object in destructive and dangerous ways? Does performance justify it? Would you still be able to call it immutable? Oh, and could you implement this sort of laziness without mutating the data structure in the specified fashion?
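    To make the scheme concrete, here is a minimal C# sketch of the lazy vector described above (Add is omitted for brevity, only Set is queued; this is illustrative, not production code):

    using System.Collections.Generic;

    // Outwardly persistent vector: Set is O(1) and queues the update;
    // the first read applies all pending updates in one array copy.
    public sealed class LazyArrayVector<T>
    {
        private readonly object sync = new object();
        private T[] array;  // shared between instances, treated as immutable
        private readonly Queue<KeyValuePair<int, T>> pendingSets;

        private LazyArrayVector(T[] array, Queue<KeyValuePair<int, T>> pending)
        {
            this.array = array;
            this.pendingSets = pending;
        }

        public static LazyArrayVector<T> Of(params T[] items)
        {
            return new LazyArrayVector<T>((T[])items.Clone(), new Queue<KeyValuePair<int, T>>());
        }

        // No array copy here: a new wrapper shares the array plus one more queued update
        public LazyArrayVector<T> Set(int index, T value)
        {
            var pending = new Queue<KeyValuePair<int, T>>(pendingSets);
            pending.Enqueue(new KeyValuePair<int, T>(index, value));
            return new LazyArrayVector<T>(array, pending);
        }

        public T Get(int index)
        {
            Flush();  // the "dangerous" internal mutation happens here
            return array[index];
        }

        private void Flush()
        {
            lock (sync)  // exclusive access, as the post notes
            {
                if (pendingSets.Count == 0) return;
                T[] fresh = (T[])array.Clone();       // single allocation + copy
                foreach (var set in pendingSets)
                    fresh[set.Key] = set.Value;
                array = fresh;                        // switch the internal array
                pendingSets.Clear();                  // empty the cache
            }
        }
    }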

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 1, 2012

    - by Bob Rhubart
    Hurricane Sandy Edition Power outages in the Cleveland area made it impossible to publish posts on Tuesday and Wednesday. In my neighborhood most are still without power. The sound of howling winds that dominated on Monday and Tuesday has been replaced by the sound of portable generators. My internet connection was restored only after AT&T U-Verse crewmen hooked up a portable generator to power the relay station up the street. Bear in mind that Cleveland is 500 miles from the Atlantic coast. Mobile Development Platform Strategy Chart: ADF Mobile, WebCenter Sites, Portal, Content and Social "Unlike desktop web focused efforts, the world of mobile has undergone change at a feverish pace," says social enterprise expert John Brunswick. His extensive post charts various resources that will help you keep up. ADF Essentials - The Bare Necessities | Floyd Teter The experiment is over... And now Oracle ACE Director Floyd Teter shares his impressions after spending some time with Oracle ADF Essentials, the free version of Oracle ADF. Expanding the Oracle Enterprise Repository with functional documentation Capgemini middleware specialist Marc Kuijpers shares information on how Oracle Enterprise Repository can be configured "to contain functional assets, i.e. functional designs, use cases and a logical data model" to aid in SOA governance efforts. A review of Oracle SOA Suite 11g Administrator's Handbook | RedStack "More so than any other single piece of content that I have seen on the topic, it provides the information that a SOA administrator needs to know in order to successfully configure, manage, monitor, troubleshoot and backup an Oracle SOA environment." So says Oracle Fusion Middleware A-Team solution architect Mark Nelson of Oracle SOA Suite 11g Administrator's Handbook, by Ahmed Aboulnaga and Arun Pareek. Eating our own dog food – Oracle's internal deployment of Oracle IDM Oracle Fusion Middleware A-Team member Brian Eidelman recommends the recent podcast on Oracle's internal deployment of Oracle OAM and OID. "This was a big project that involved migrating a bunch of critical, high volume applications to leverage OAM and OID," says Eidelman. "So I suggest you tune in to see and hear more about how we deploy our own software." Thought for the Day "Anyone who says they're not afraid at the time of a hurricane is either a fool or a liar, or a little bit of both." — Anderson Cooper Source: BrainyQuote

    Read the article

  • Merge Records in Session bean by using ADF Drag/Drop

    - by shantala.sankeshwar
    This article describes how to merge multiple selected records in a Session Bean using the ADF drag & drop feature. Described below is a simple use case that shows exactly how this can be achieved. Here we have a table and a user input field. The table shows EMP records and the user input field accepts a Salary. When we drag & drop multiple records onto the user input field, the selected records get updated with the new Salary provided. Steps: Let us suppose that we have created a Java EE Web Application with Entities from the Emp table. Then create an EJB Session Bean and generate a Data Control for the same. Write a simple method in the session EJB bean and expose it on the local interface: public void updateEmprecords(List empList, Object sal) { Emp emp = null; for (int i = 0; i < empList.size(); i++) { emp = em.find(Emp.class, empList.get(i)); emp.setSal((BigDecimal)sal); em.merge(emp); } } Now let us create an updateEmpRecords.jspx page in the viewController project and drop the empFindAll object as an ADF Table. Define a custom SelectionListener method for the table: public void selectionListener(SelectionEvent selectionEvent) { // This method gets the Empno of the selected record and stores it in the list object UIXTable table = (UIXTable)selectionEvent.getComponent(); FacesCtrlHierNodeBinding fcr = (FacesCtrlHierNodeBinding)table.getSelectedRowData(); Number empNo = (Number)fcr.getAttribute("empno"); this.getSelectedRowsList().add(empNo); } Set the table's selectedRowKeys to #{bindings.empFindAll.collectionModel.selectedRow}. Drop an inputText on the same jspx page to accept the Salary. Now we would like to drag records from the above table and drop them on the inputText field. This can be achieved by inserting a dragSource operation inside the table and a dropTarget operation inside the inputText: <af:dragSource discriminant="tab"/> // Insert this inside the table <af:inputText label="Enter Salary" id="it13" autoSubmit="true" binding="#{test.deptValue}"> <af:dropTarget dropListener="#{test.handleTableDrop}"> <af:dataFlavor flavorClass="org.apache.myfaces.trinidad.model.RowKeySet" discriminant="tab"/> </af:dropTarget> <af:convertNumber/> </af:inputText> In the above code, when the user drags & drops multiple records onto the inputText, the dropListener method gets called. Go to the respective page definition file, create an updateEmprecords method action, and execute it from the dropListener method: public DnDAction handleTableDrop(DropEvent dropEvent) { // The code below gets the updateEmprecords method, passes parameters & executes the method DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class); RowKeySet droppedKeySet = dropEvent.getTransferable().getData(df); if (droppedKeySet != null && droppedKeySet.size() > 0) { DCBindingContainer bindings = (DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry(); OperationBinding updateEmp; updateEmp = bindings.getOperationBinding("updateEmprecords"); updateEmp.getParamsMap().put("sal", this.getDeptValue().getAttributes().get("value")); updateEmp.getParamsMap().put("empList", this.getSelectedRowsList()); updateEmp.execute(); // The code below performs an Execute operation to refresh the updated records OperationBinding executeBinding; executeBinding = bindings.getOperationBinding("Execute"); executeBinding.execute(); AdfFacesContext.getCurrentInstance().addPartialTarget(dropEvent.getDragComponent()); this.getSelectedRowsList().clear(); } return DnDAction.NONE; } Run the updateEmpRecords.jspx page and enter any Salary, say '5000'. Select multiple records in the table and drop the selected records on the inputText Salary. Note that the salary of all the selected records gets updated to 5000. Technorati Tags: ADF Drag and drop,EJB Session bean,ADF table,inputText,DropEvent

    Read the article

  • SCORM and the Learning Management System (LMS)

    What actually is SCORM? SCORM, Shareable Content Object Reference Model, is a standard for web-based e-learning that has been developed to define communication between client-side content and a runti... [Author: Stuart Campbell - Computers and Internet - October 05, 2009]

    Read the article

  • PayPal India Problems Continue

    - by Ravish
    The Reserve Bank of India has been giving PayPal and its users in India a hard time. RBI had previously blocked PayPal transactions in India a few times, and made it difficult to withdraw payments by enforcing export- and forex-related compliance. Here is yet more bad news for Indian PayPal users. With effect from March 1st, Indian users cannot receive payments of more than $500 per transaction into their PayPal account. Moreover, you cannot keep or use any funds in your PayPal account: you cannot use your PayPal balance to pay for any goods or services, and you must withdraw incoming payments to your bank account within 7 days of receipt. These changes have rendered PayPal almost useless for small businesses, webmasters and publishers. Most webmasters and publishers rely on PayPal to receive payments from advertisers and clients. It has also made it impossible to buy anything online with PayPal. Sending payments abroad via other channels is already a pain: a bank wire requires too many formalities, too much documentation and too much time. Moreover, you are even required to deduct TDS on payments you make for any products or services. The restrictions take effect on March 1st, so you have 30 days to complete any pending transactions you may have. This step by RBI is yet another gimmick by a corrupt Indian Government to make life difficult for entrepreneurs, kill innovation, slap on more taxes and create more channels for bribes. The following is the notification from PayPal about this issue: As part of our commitment to provide a high level of customer service, we would like to give you a 30-day advance notice on changes to our user agreement for India. With effect from 1 March 2011, you are required to comply with the requirements set out in the notification of the Reserve Bank of India governing the processing and settlement of export-related receipts facilitated by online payment gateways ("RBI Guidelines"). In order to comply with the RBI Guidelines, our user agreement in India will be amended for the following services as follows: Any balance in and all future payments into your PayPal account may not be used to buy goods or services and must be transferred to your bank account in India within 7 days from the receipt of confirmation from the buyer in respect of the goods or services; and Export-related payments for goods and services into your PayPal account may not exceed US$500 per transaction. We seek your understanding as we continue to employ our best efforts to comply with the RBI Guidelines in a timely manner.

    Read the article

  • Calling methods on Objects

    - by Mashael
    Let's say we have a class called 'Automobile' and we have an instance of that class called 'myCar'. I would like to ask: why do we need to put the values that our methods return into a variable before using them? Why don't we just call the method? For example, why should we write: string message = myCar.SpeedMessage(); Console.WriteLine(message); instead of: Console.WriteLine(myCar.SpeedMessage());

    Read the article

  • schedule compliance while keeping up technical support and resolving issues

    - by imays
    I am the founder of a small software company. I developed the flagship product myself, and the company has grown to 14 people. One point of pride is that we have never had to take investment or loans. The core development team is 5 people: 3 seniors and 2 juniors. After the first release, we received many issues from our customers. Most of them are bug reports, customization requests, usage questions and upgrade requests. Issues come in from customers many times every day, and each takes some amount of our developers' time. Because our product is a software development kit (SDK), most questions can only be answered by our developers, and resolving bug issues requires them too. Estimating the time needed to resolve a bug is hard; I fully understand that. However, our developers insist they cannot set a due date for any project because they are busy doing technical support and bug fixes for customer issues every day. Of course, they never work overtime. I suggested an idea: divide the team into two parts, one focused on milestone-driven development, the other on technical support and bug fixes without fixed due dates. Then we could announce a release plan officially. After each release, the two parts would swap roles for the next milestone. However, they said no, because they believe it is impossible to fully share knowledge and design documents that way. They still say they cannot set a release date, and they ask me to keep due dates flexible. They will not commit to a due date for any milestone. Fortunately, our company has no loans or investors, so we are not being choked. But I think it is a bad idea to let this situation continue. I know the story of the ant and the grasshopper. Our customers are tired of waiting forever for our release dates. Companies have limited time and money. If a flexible due date with no limit were acceptable, would they accept a flexible payday? What is the root cause of our problem? All I want is to set and hit the due date of each milestone precisely, without losing frequent technical support. I think there must be a solution for this situation. Please answer me. Thanks in advance. PS. Our tools and project management methods are Trello, a Mantis-like issue tracker, shared calendar software and Scrum (cards collected into a series of 'small and high-completeness' projects).

    Read the article

  • Drawing multiple Textures as tilemap

    - by DocJones
    I am trying to draw a 2d game map and the objects on the map in a single pass. Here is my OpenGL initialization code // Turn off unnecessary operations glDisable(GL_DEPTH_TEST); glDisable(GL_LIGHTING); glDisable(GL_CULL_FACE); glDisable(GL_STENCIL_TEST); glDisable(GL_DITHER); glEnable(GL_BLEND); glEnable(GL_TEXTURE_2D); // activate pointer to vertex & texture array glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); My drawing code is called by an NSTimer every 1/60 s. Here is the drawing code of my world object: - (void) draw:(NSRect)rect withTimedDelta:(double)d { GLint *t; glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); for (int x=0; x<[_map getWidth] ; x++) { for (int y=0; y<[_map getHeight] ; y++) { GLint v[] = { 16*x ,16*y, 16*x+16,16*y, 16*x+16,16*y+16, 16*x ,16*y+16 }; t=[_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]]; glVertexPointer(2, GL_INT, 0, v); glTexCoordPointer(2, GL_INT, 0, t); glDrawArrays(GL_QUADS, 0, 4); } } } (_textureManager is a singleton that loads each texture only once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls: - (void) drawWithTimedDelta:(double)d { GLint *t; GLint v[] = { 16*xpos ,16*ypos, 16*xpos+16,16*ypos, 16*xpos+16,16*ypos+16, 16*xpos ,16*ypos+16 }; glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]); t=[_textureManager getBlockWithNumber:12]; glVertexPointer(2, GL_INT, 0, v); glTexCoordPointer(2, GL_INT, 0, t); glDrawArrays(GL_QUADS, 0, 4); } As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects on top of it. Debugging shows me that the first call is performed correctly (the world is drawn), but the following call to draw all objects ONLY draws the objects; the rest of the scene turns black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks. PS: Here is the GitHub link to the project. It may not be in sync with my post here, but it may help with more in-depth analysis.

    Read the article

  • Scaling Skeletal values to be able to reach objects on the screen

    - by Sweta Dwivedi
    I have created a game using Kinect + XNA, and the game runs in full-screen mode. However, when I try to touch or reach a certain area on the screen with my hand, I can't reach it: I end up outside the sensor's range while trying to touch the object on the game screen. Is there any way I can scale the skeletal values so that users can easily touch objects on the screen without having to stretch or bend too much?
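    One common approach is to map a small "reach box" around the user in skeleton space onto the whole screen, clamping at the edges, so a comfortable hand movement spans the full display. A hypothetical C# sketch for the Kinect for Windows SDK (the helper and the reach extents are made up for illustration; SkeletonPoint coordinates are in metres):

    using System;
    using Microsoft.Kinect;

    public static class JointScaler
    {
        // Maps a joint position to screen pixels, treating a box of
        // +/- reachX by +/- reachY metres around the sensor axis as the full screen.
        public static void ScaleToScreen(Joint joint, int screenW, int screenH,
                                         float reachX, float reachY, // e.g. 0.4f, 0.3f
                                         out float x, out float y)
        {
            float nx = (joint.Position.X + reachX) / (2 * reachX); // 0..1 across the box
            float ny = (reachY - joint.Position.Y) / (2 * reachY); // flipped: screen Y grows downward

            // Clamp so the cursor stops at the screen edges instead of leaving them
            x = Math.Min(Math.Max(nx, 0f), 1f) * screenW;
            y = Math.Min(Math.Max(ny, 0f), 1f) * screenH;
        }
    }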

    Read the article
