Search Results

Search found 13344 results on 534 pages for 'anonymous inner classes'.

Page 98 of 534

  • CSS Child selectors in IE7 tables

    - by John
    I'm trying to use the CSS child selector in IE7, and it doesn't seem to work. I have nested tables. My outer table has the class name "mytable", and I want the td's of the outer table to show borders, but not the inner table's td's. I think I should be able to do that with CSS like this:

        .mytable { border-style: solid }
        .mytable>tr>td { border-style: solid }

    But the second rule seems to have no effect. If I make it less specific, it applies to all the td's and I see too many borders:

        td { border-style: solid }

    So I think it really is just an issue with the selectors. Pages like this suggest that IE7 should be able to do what I want. Am I doing something silly? Here's the whole HTML file:

        <html>
        <head>
        <style type="text/css">
        .mytable { border-style: solid; border-collapse: collapse;}
        td { border-style: solid; }
        </style>
        </head>
        <body>
        <table class="mytable">
          <tr>
            <td>Outer top-left</td>
            <td>Outer top-right</td>
          </tr>
          <tr>
            <td>Outer bottom-left</td>
            <td>
              <table>
                <tr>
                  <td>Inner top-left</td>
                  <td>Inner top-right</td>
                </tr>
                <tr>
                  <td>Inner bottom-left</td>
                  <td>Inner bottom-right</td>
                </tr>
              </table>
            </td>
          </tr>
        </table>
        </body>
        </html>
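    A likely explanation (my assumption, not confirmed in the question): browsers insert an implicit tbody element between table and tr when they build the DOM, so a child selector written as .mytable>tr>td matches nothing in any browser, IE7 included. A sketch of the selector with the implicit tbody accounted for:

        /* Sketch: only the outer table's cells get borders; inner cells are not
           direct children of the outer tbody > tr, so they are left alone. */
        .mytable > tbody > tr > td { border-style: solid; }

    If the markup might contain an explicit thead or tfoot as well, repeat the rule for those, or give the inner table its own class and reset its borders there.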

    Read the article

  • Postgres Stored procedure using iBatis

    - by Om Yadav
    I'm getting this error:

        --- The error occurred while applying a parameter map.
        --- Check the newSubs-InlineParameterMap.
        --- Check the statement (query failed).
        --- Cause: org.postgresql.util.PSQLException: ERROR: wrong record type supplied in RETURN NEXT
        Where: PL/pgSQL function "getnewsubs" line 34 at return next

    The function detail is as below:

        CREATE OR REPLACE FUNCTION getnewsubs(timestamp without time zone, timestamp without time zone, integer)
        RETURNS SETOF record AS
        $BODY$
        declare
          v_fromdt alias for $1;
          v_todt alias for $2;
          v_domno alias for $3;
          v_cursor refcursor;
          v_rec record;
          v_cpno bigint;
          v_actno int;
          v_actname varchar(50);
          v_actid varchar(100);
          v_cpntypeid varchar(100);
          v_mrp double precision;
          v_domname varchar(100);
          v_usedt timestamp without time zone;
          v_expirydt timestamp without time zone;
          v_createdt timestamp without time zone;
          v_ctno int;
          v_phone varchar;
        begin
          open v_cursor for select cpno,c.actno,usedt from cpnusage c inner join account s on s.actno=c.actno where usedt = $1 and usedt < $2 and validdomstat(s.domno,v_domno) order by c.usedt;
          fetch v_cursor into v_cpno,v_actno,v_usedt;
          while found loop
            if isactivation(v_cpno,v_actno,v_usedt) IS TRUE then
              select into v_actno,v_actname,v_actid,v_cpntypeid,v_mrp,v_domname,v_ctno,v_cpno,v_usedt,v_expirydt,v_createdt,v_phone
                     a.actno,a.actname as name,a.actid as actid,c.descr as cpntypeid,l.mrp as mrp,s.domname as domname,c.ctno as ctno,b.cpno,b.usedt,b.expirydt,d.createdt,a.phone
              from account a
              inner join cpnusage b on a.actno=b.actno
              inner join cpn d on b.cpno=d.cpno
              inner join cpntype c on d.ctno=c.ctno
              inner join ssgdom s on a.domno=s.domno
              left join price_class l ON l.price_class_id=b.price_class_id
              where validdomstat(a.domno,v_domno) and b.cpno=v_cpno and b.actno=v_actno;
              select into v_rec v_actno,v_actname,v_actid,v_cpntypeid,v_mrp,v_domname,v_ctno,v_cpno,v_usedt,v_expirydt,v_createdt,v_phone;
              return next v_rec;
            end if;
            fetch v_cursor into v_cpno,v_actno,v_usedt;
          end loop;
          return;
        end;
        $BODY$ LANGUAGE 'plpgsql' VOLATILE;

        ALTER FUNCTION getnewsubs(timestamp without time zone, timestamp without time zone, integer) OWNER TO radius;

    If I run the function from the console it works fine and gives the correct response, but calling it through Java (iBatis) causes the error above. Can anybody help? It's quite urgent. Thanks in advance.
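    For what it's worth, "wrong record type supplied in RETURN NEXT" is PostgreSQL complaining that the record handed to RETURN NEXT does not match the row type the caller declared for this SETOF record function. From the console you presumably supply a matching column definition list (SELECT * FROM getnewsubs(...) AS t(actno int, actname varchar, ...)); the SQL that iBatis sends most likely declares a different shape, or none at all. One hedged way to sidestep this, assuming the column names and types used inside the function, is to declare the output columns on the function itself so no column definition list is needed:

        -- Sketch only: OUT parameters fix the result row type (names/types are
        -- assumptions taken from the variables in the original function).
        CREATE OR REPLACE FUNCTION getnewsubs(
            p_fromdt timestamp without time zone,
            p_todt   timestamp without time zone,
            p_domno  integer,
            OUT actno int, OUT actname varchar(50), OUT actid varchar(100),
            OUT cpntypeid varchar(100), OUT mrp double precision,
            OUT domname varchar(100), OUT ctno int, OUT cpno bigint,
            OUT usedt timestamp without time zone,
            OUT expirydt timestamp without time zone,
            OUT createdt timestamp without time zone, OUT phone varchar)
        RETURNS SETOF record AS
        $BODY$
        begin
          -- ... open the cursor and loop exactly as before, assigning each value
          -- to the matching OUT parameter, then emit the row with:
          return next;   -- no argument when OUT parameters are used
          return;
        end;
        $BODY$ LANGUAGE plpgsql;

    With that in place the statement iBatis issues can simply be SELECT * FROM getnewsubs(?, ?, ?) and map the result columns by name.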

    Read the article

  • Returning JSON in CFFunction and appending it to layer is causing an error

    - by Mel
    I'm using the qTip jQuery plugin to generate a dynamic tooltip. I'm getting an error in my JS, and I'm unsure if its source is the JSON or the JS. The tooltip calls the following function: (sorry about all this code, but it's necessary) <cffunction name="fGameDetails" access="remote" returnType="any" returnformat="JSON" output="false" hint="This grabs game details for the games.cfm page"> <!---Argument, which is the game ID---> <cfargument name="gameID" type="numeric" required="true" hint="CFC will look for GameID and retrieve its details"> <!---Local var---> <cfset var qGameDetails = ""> <!---Database query---> <cfquery name="qGameDetails" datasource="#REQUEST.datasource#"> SELECT titles.titleName AS tName, titles.titleBrief AS tBrief, games.gameID, games.titleID, games.releaseDate AS rDate, genres.genreName AS gName, platforms.platformAbbr AS pAbbr, platforms.platformName AS pName, creviews.cReviewScore AS rScore, ratings.ratingName AS rName FROM games Inner Join platforms ON platforms.platformID = games.platformID Inner Join titles ON titles.titleID = games.titleID Inner Join genres ON genres.genreID = games.genreID Inner Join creviews ON games.gameID = creviews.gameID Inner Join ratings ON ratings.ratingID = games.ratingID WHERE (games.gameID = #ARGUMENTS.gameID#); </cfquery> <cfreturn qGameDetails> </cffunction> This function returns the following JSON: { "COLUMNS": [ "TNAME", "TBRIEF", "GAMEID", "TITLEID", "RDATE", "GNAME", "PABBR", "PNAME", "RSCORE", "RNAME" ], "DATA": [ [ "Dark Void", "Ancient gods known as 'The Watchers,' once banished from our world by superhuman Adepts, have returned with a vengeance.", 154, 54, "January, 19 2010 00:00:00", "Action & Adventure", "PS3", "Playstation 3", 3.3, "14 Anos" ] ] } The problem I'm having is every time I try to append the JSON to the layer #catalog, I get a syntax error that says "missing parenthetical." This is the JavaScript I'm using: $(document).ready(function() { $('#catalog a[href]').each(function() { $(this).qtip( { content: { url: '/gamezilla/resources/components/viewgames.cfc?method=fGameDetails', data: { gameID: $(this).attr('href').match(/gameID=([0-9]+)$/)[1] }, method: 'get' }, api: { beforeContentUpdate: function(content) { var json = eval('(' + content + ')'); content = $('<div />').append( $('<h1 />', { html: json.TNAME })); return content; } }, style: { width: 300, height: 300, padding: 0, name: 'light', tip: { corner: 'leftMiddle', size: { x: 40, y : 40 } } }, position: { corner: { target: 'rightMiddle', tooltip: 'leftMiddle' } } }); }); }); Any ideas where I'm going wrong? I tried many things for several days and I can't find the issue. Many thanks!
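    One thing worth checking (an assumption on my part, since the failing line isn't shown): "missing ) in parenthetical" from eval() usually means the string being evaluated isn't bare JSON, for instance when ColdFusion's "Prefix serialized JSON" setting prepends // to the response, or when the response has already been parsed into an object. A more defensive version of the callback, parsing instead of eval'ing, might look like this sketch:

        // Sketch: strip a possible ColdFusion security prefix, parse instead of eval,
        // and read the value out of the COLUMNS/DATA arrays CF returns for a query.
        beforeContentUpdate: function(content) {
            var text = (typeof content === 'string') ? content.replace(/^\/\/\s*/, '') : content;
            var json;
            try {
                json = (typeof text === 'string') ? jQuery.parseJSON(text) : text;
            } catch (e) {
                return 'Could not parse response: ' + e.message;
            }
            // A serialized query has no json.TNAME property; the value lives in
            // DATA[0] at the index of 'TNAME' within COLUMNS.
            var row = json.DATA[0];
            var tname = row[jQuery.inArray('TNAME', json.COLUMNS)];
            return jQuery('<div />').append(jQuery('<h1 />', { html: tname }));
        }

    Even if the prefix turns out not to be the problem, json.TNAME in the original callback would come back undefined for a query serialized this way, so the COLUMNS/DATA lookup is worth keeping.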

    Read the article

  • Getting MySQL to work with Entity Framework 4.0

    - by DigiMortal
    Does MySQL work with Entity Framework 4.0? The answer is: yes, it works! I just put up one experimental project to play with MySQL and Entity Framework 4.0 and in this posting I will show you how to get MySQL data to EF. Also I will give some suggestions how to deploy your applications to hosting and cloud environments. MySQL stuff As you may guess you need MySQL running somewhere. I have MySQL installed to my development machine so I can also develop stuff when I’m offline. The other thing you need is MySQL Connector for .NET Framework. Currently there is available development version of MySQL Connector/NET 6.3.5 that supports Visual Studio 2010. Before you start download MySQL and Connector/NET: MySQL Community Server Connector/NET 6.3.5 If you are not big fan of phpMyAdmin then you can try out free desktop client for MySQL – HeidiSQL. I am using it and I am really happy with this program. NB! If you just put up MySQL then create also database with couple of table there. To use all features of Entity Framework 4.0 I suggest you to use InnoDB or other engine that has support for foreign keys. Connecting MySQL to Entity Framework 4.0 Now create simple console project using Visual Studio 2010 and go through the following steps. 1. Add new ADO.NET Entity Data Model to your project. For model insert the name that is informative and that you are able later recognize. Now you can choose how you want to create your model. Select “Generate from database” and click OK. 2. Set up database connection Change data connection and select MySQL Database as data source. You may also need to set provider – there is only one choice. Select it if data provider combo shows empty value. Click OK and insert connection information you are asked about. Don’t forget to click test connection button to see if your connection data is okay. If everything works then click OK. 3. Insert context name Now you should see the following dialog. Insert your data model name for application configuration file and click OK. Click next button. 4. Select tables for model Now you can select tables and views your classes are based on. I have small database with events data. Uncheck the checkbox “Include foreign key columns in the model” – it is damn annoying to get them away from model later. Also insert informative and easy to remember name for your model. Click finish button. 5. Define your classes Now it’s time to define your classes. Here you can see what Entity Framework generated for you. Relations were detected automatically – that’s why we needed foreign keys. The names of classes and their members are not nice yet. After some modifications my class model looks like on the following diagram. Note that I removed attendees navigation property from person class. Now my classes look nice and they follow conventions I am using when naming classes and their members. NB! Don’t forget to see properties of classes (properties windows) and modify their set names if set names contain numbers (I changed set name for Entity from Entity1 to Entities). 6. Let’s test! Now let’s write simple testing program to see if MySQL data runs through Entity Framework 4.0 as expected. My program looks for events where I attended. 
using(var context = new MySqlEntities()) {     var myEvents = from e in context.Events                     from a in e.Attendees                     where a.Person.FirstName == "Gunnar" &&                             a.Person.LastName == "Peipman"                     select e;       Console.WriteLine("My events: ");       foreach(var e in myEvents)     {         Console.WriteLine(e.Title);     } }   Console.ReadKey(); And when I run it I get the result shown on screenshot on right. I checked out from database and these results are correct. At first run connector seems to work slow but this is only the effect of first run. As connector is loaded to memory by Entity Framework it works fast from this point on. Now let’s see what we have to do to get our program work in hosting and cloud environments where MySQL connector is not installed. Deploying application to hosting and cloud environments If your hosting or cloud environment has no MySQL connector installed you have to provide MySQL connector assemblies with your project. Add the following assemblies to your project’s bin folder and include them to your project (otherwise they are not packaged by WebDeploy and Azure tools): MySQL.Data MySQL.Data.Entity MySQL.Web You can also add references to these assemblies and mark references as local so these assemblies are copied to binary folder of your application. If you have references to these assemblies then you don’t have to include them to your project from bin folder. Also add the following block to your application configuration file. <?xml version="1.0" encoding="utf-8"?> <configuration> ...   <system.data>     <DbProviderFactories>         <add              name=”MySQL Data Provider”              invariant=”MySql.Data.MySqlClient”              description=”.Net Framework Data Provider for MySQL”              type=”MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data,                   Version=6.2.0.0, Culture=neutral,                   PublicKeyToken=c5687fc88969c44d”          />     </DbProviderFactories>   </system.data> ... </configuration> Conclusion It was not hard to get MySQL connector installed and MySQL connected to Entity Framework 4.0. To use full power of Entity Framework we used InnoDB engine because it supports foreign keys. It was also easy to query our model. To get our project online we needed some easy modifications to our project and configuration files.
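    One piece the walkthrough doesn't show explicitly is the connection string the generated context reads from the configuration file. As a rough sketch (the model name, credentials, and metadata resource names here are assumptions and must match your .edmx), it typically looks like this for an EF4 model on top of the MySQL provider:

        <connectionStrings>
          <add name="MySqlEntities"
               connectionString="metadata=res://*/MySqlModel.csdl|res://*/MySqlModel.ssdl|res://*/MySqlModel.msl;provider=MySql.Data.MySqlClient;provider connection string=&quot;server=localhost;user id=myuser;password=mypassword;database=mydb&quot;"
               providerName="System.Data.EntityClient" />
        </connectionStrings>

    Also, if you ship the 6.3.5 connector assemblies with the application, the Version attribute in the DbProviderFactories registration above should presumably be changed to match the assemblies you actually deploy.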

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You’ve just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn’t kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That’s why I’m interested in SQL code smells. SQL Code Smells aren’t necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle. SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless but we’re  concerned about the occasional time it isn’t. Let’s give an example: String truncation. Let’s give another even more frightening one, rounding errors on assignment to a number of different precision. Each requires a blog-post to explain in detail and I’m not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren’t the same datatype, especially if you are relying on implicit conversion to work its magic.For details of the problem and the consequences, see here:  SR0014: Data loss might occur when casting from {Type1} to {Type2} . For any experienced Database Developer, this is a more frightening read than a Vampire Story. This is why one of the SQL Code Smells that makes me edgy, in my own or other peoples’ code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things have gone wrong. Either sloppy naming, or mixed datatypes. Sure it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like this I’m going to show you is excellent for tidying up your code before you check it back into source Control! 1/ Checking Parameters only If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine) Even this little check can occasionally be scarily revealing. ;WITH userParameter AS  ( SELECT   c.NAME AS ParameterName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  t.name + ' '     + CASE     --we may have to put in the length            WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.max_length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.name IN ('nchar', 'nvarchar')                      THEN c.max_length / 2 ELSE c.max_length                    END)                END + ')'         WHEN t.name IN ('decimal', 'numeric')             THEN '(' + CONVERT(VARCHAR(4), c.precision)                   + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = c.XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType]  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'   AND parameter_id>0)SELECT CONVERT(CHAR(80),objectName+'.'+ParameterName),DataType FROM UserParameterWHERE ParameterName IN   (SELECT ParameterName FROM UserParameter    GROUP BY ParameterName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY ParameterName   so, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long, or even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX) 2/ Columns and Parameters Actually, once we’ve cleared up the mess we’ve made of our parameter-naming in the database we’re inspecting, we’re going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn’t consistent for a datatype). We’ll have to leave them out for this check. Voila! A slight modification of the first routine ;WITH userObject AS  ( SELECT   Name AS DataName,--the actual name of the parameter or column ('@' removed)  --and the qualified object name of the routine  OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,  --now the harder bit: the definition of the datatype.  TypeName + ' '     + CASE     --we may have to put in the length. e.g. 
CHAR (10)           WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN MaxLength = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN TypeName IN ('nchar', 'nvarchar')                      THEN MaxLength / 2 ELSE MaxLength                    END)                END + ')'         WHEN TypeName IN ('decimal', 'numeric')--a BCD number!             THEN '(' + CONVERT(VARCHAR(4), Precision)                   + ',' + CONVERT(VARCHAR(4), Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0 --tush tush. XML         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType],       DataObjectType  FROM   (Select t.name AS TypeName, REPLACE(c.name,'@','') AS Name,          c.max_length AS MaxLength, c.precision AS [Precision],           c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,          is_XML_Document,'P' AS DataobjectType  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  AND parameter_id>0  UNION all  Select t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,          c.precision AS [Precision], c.scale AS [Scale],          c.[Object_id] AS ObjectID, XML_collection_ID,is_XML_Document,          'C' AS DataobjectType            FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID   WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'  )f)SELECT CONVERT(CHAR(80),objectName+'.'   + CASE WHEN DataobjectType ='P' THEN '@' ELSE '' END + DataName),DataType FROM UserObjectWHERE DataName IN   (SELECT DataName FROM UserObject   GROUP BY DataName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY DataName     Hmm. I can tell you I found quite a few minor issues with the various tabases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm. odd. Why is a city fifty characters long in that view?  The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess. We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You’ll notice that we’ve delibarately removed the indication of whether a column is persisted, or is an identity column because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances) then uncomment them! ;WITH userColumns AS  ( SELECT   c.NAME AS columnName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  REPLACE(t.name + ' '   + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column          (SELECT definition FROM sys.computed_columns cc           WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)     --we may have to put in the length            WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.Max_Length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.Name IN ('nchar', 'nvarchar')                      THEN c.Max_Length / 2 ELSE c.Max_Length                    END)                END + ')'       WHEN t.name IN ('decimal', 'numeric')       THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'       ELSE ''      END + CASE WHEN c.is_rowguidcol = 1          THEN ' ROWGUIDCOL'          ELSE ''         END + CASE WHEN XML_collection_ID <> 0            THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                THEN 'DOCUMENT '                ELSE 'CONTENT '               END + COALESCE((SELECT                QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM                sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE                sc.xml_collection_ID = c.XML_collection_ID),                'NULL') + ')'            ELSE ''           END + CASE WHEN is_identity = 1             THEN CASE WHEN OBJECTPROPERTY(object_id,                'IsUserTable') = 1 AND COLUMNPROPERTY(object_id,                c.name,                'IsIDNotForRepl') = 0 AND OBJECTPROPERTY(object_id,                'IsMSShipped') = 0                THEN ''                ELSE ' NOT FOR REPLICATION '               END             ELSE ''            END + CASE WHEN c.is_nullable = 0               THEN ' NOT NULL'               ELSE ' NULL'              END + CASE                WHEN c.default_object_id <> 0                THEN ' DEFAULT ' + object_Definition(c.default_object_id)                ELSE ''               END + CASE                WHEN c.collation_name IS NULL                THEN ''                WHEN c.collation_name <> (SELECT                collation_name                FROM                sys.databases                WHERE                name = DB_NAME()) COLLATE Latin1_General_CI_AS                THEN COALESCE(' COLLATE ' + c.collation_name,                '')                ELSE ''                END,'  ',' ') AS [DataType]FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')SELECT CONVERT(CHAR(80),objectName+'.'+columnName),DataType FROM UserColumnsWHERE columnName IN (SELECT columnName FROM UserColumns  GROUP BY columnName  HAVING MIN(Datatype)<>MAX(DataType))ORDER BY columnName If you take a look down the results against Adventureworks, you'll see once again that there are things to investigate, mostly, in the illustration, discrepancies between null and non-null datatypes So I here you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata so we'll have to find a more subtle way of flushing these out, and that will, I'm afraid, have to wait!
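    The string-truncation smell mentioned at the start is easy to demonstrate; here is a small illustrative snippet of my own (not one of the article's listings) showing why mismatched lengths between variables are worth hunting down: assignment to a shorter variable truncates silently, whereas inserting the same value into a shorter column at least raises an error.

        DECLARE @LogEntry VARCHAR(100) = 'A log message written on the assumption that the target would always have plenty of room';
        DECLARE @Shorter  VARCHAR(10);
        SET @Shorter = @LogEntry;          -- silently truncated: no error, no warning
        SELECT @Shorter AS TruncatedValue; -- 'A log mess'
        -- Inserting @LogEntry into a varchar(10) column would instead fail with
        -- "String or binary data would be truncated" (under default ANSI_WARNINGS).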

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (no statistics based on a past existence), but this is not true if there are fewer than 6 updates to the temporary table. This can lead to poor performance of queries that are sensitive to the contents of temporary tables.

    I was optimizing SQL Server performance at one of my customers, who provide search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched for what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because a non-optimal plan was used due to this behavior. This is not a plan caching issue but a temporary table statistics caching issue, part of the temporary object caching feature introduced in SQL Server 2005 and also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this by disabling temporary table caching, which happens when a DDL statement is explicitly executed on the temporary table. One possibility is an ALTER TABLE statement, but that can lead to a duplicate constraint name error on concurrent stored procedure executions. The other way is to create an index. I suspect many customers are in this situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance.

    The ideal solution would be a more aggressive statistics update when the temporary table has few rows and temporary table caching is in use; I will open a Connect item to report this issue. Meanwhile you can mitigate it by creating an index on the temporary table. You can monitor active temporary tables with the Windows Performance Monitor counter SQL Server: General Statistics -> Active Temp Tables.
The script to understand the issue and the workaround is listed below:set nocount onset statistics time offset statistics io offdrop table tab7gocreate table tab7 (c1 int primary key clustered, c2 int, c3 char(200))gocreate index test on tab7(c2, c1, c3)gobegin trandeclare @i intset @i = 1while @i <= 50000begininsert into tab7 values (@i, 1, ‘a’)set @i = @i + 1endcommit trangoinsert into tab7 values (50001, 1, ‘a’)gocheckpointgodrop proc test_slowgocreate proc test_slow @i intasbegindeclare @j intcreate table #temp1 (c1 int primary key)insert into #temp1 (c1) select @iselect @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)endgodbcc dropcleanbuffersset statistics time onset statistics io ongo–high reads as expected for parameter ’1'exec test_slow 1godbcc dropcleanbuffersgo–high reads that are not expected for parameter ’2'exec test_slow 2godrop proc test_with_recompilegocreate proc test_with_recompile @i intasbegindeclare @j intcreate table #temp1 (c1 int primary key)insert into #temp1 (c1) select @iselect @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)option (recompile)endgodbcc dropcleanbuffersset statistics time onset statistics io ongo–high reads as expected for parameter ’1'exec test_with_recompile 1godbcc dropcleanbuffersgo–high reads that are not expected for parameter ’2'–low reads on 3rd execution as expected for parameter ’2'exec test_with_recompile 2godrop proc test_with_alter_table_recompilegocreate proc test_with_alter_table_recompile @i intasbegindeclare @j intcreate table #temp1 (c1 int primary key)–to avoid caching of temporary tables one can create a constraint–but this might lead to duplicate constraint name error on concurrent usagealter table #temp1 add constraint test123 unique(c1)insert into #temp1 (c1) select @iselect @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)option (recompile)endgodbcc dropcleanbuffersset statistics time onset statistics io ongo–high reads as expected for parameter ’1'exec test_with_alter_table_recompile 1godbcc dropcleanbuffersgo–low reads as expected for parameter ’2'exec test_with_alter_table_recompile 2godrop proc test_with_index_recompilegocreate proc test_with_index_recompile @i intasbegindeclare @j intcreate table #temp1 (c1 int primary key)–to avoid caching of temporary tables one can create an indexcreate index test on #temp1(c1)insert into #temp1 (c1) select @iselect @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)option (recompile)endgoset statistics time onset statistics io ondbcc dropcleanbuffersgo–high reads as expected for parameter ’1'exec test_with_index_recompile 1godbcc dropcleanbuffersgo–low reads as expected for parameter ’2'exec test_with_index_recompile 2go
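    Distilled from the script above, the workaround amounts to issuing a DDL statement on the temporary table inside the procedure so the cached table (and its stale statistics) cannot be reused; a minimal sketch of the index variant, using the same tab7 test table the script creates:

        CREATE PROC search_with_fresh_stats @i int
        AS
        BEGIN
            CREATE TABLE #temp1 (c1 int PRIMARY KEY);
            -- The extra index is the DDL that disables temporary-object caching,
            -- so statistics reflect this execution rather than a previous one.
            CREATE INDEX ix_temp1_c1 ON #temp1 (c1);
            INSERT INTO #temp1 (c1) SELECT @i;
            SELECT t7.c1
            FROM tab7 t7
            INNER JOIN #temp1 t ON t7.c2 = t.c1
            OPTION (RECOMPILE);
        END

    The procedure and index names are placeholders of my own; the technique is the same one the test_with_index_recompile procedure in the script demonstrates.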

    Read the article

  • What is wrong with my gtkrc file?

    - by PP
    I have written following gtkrc file from some other theme gtkrc file. This theme is normal theme with buttons using pixmap theme engine. I have also given background image to GtkEntry. Problem is that, When i use this theme my buttons doesn't show text one them and my entry box does not show cursor. Plus in engine "pixmap" tag I need to specify image name with it's path as I have already mentioned pixmap_path on the top of rc file but why I still need to specify the path in file = "xxx" # gtkrc file. pixmap_path "./backgrounds:./icons:./buttons:./emotions" gtk-button-images = 1 #Icon Sizes and color definitions gtk-icon-sizes = "gtk-small-toolbar=16,16:gtk-large-toolbar=24,24:gtk-button=16,16" gtk-toolbar-icon-size = GTK_ICON_SIZE_SMALL_TOOLBAR gtk_color_scheme = "fg_color:#000000\nbg_color:#848484\nbase_color:#000000\ntext_color:#000000\nselected_bg_color:#f39638\nselected_fg_color:#000000\ntooltip_bg_color:#634110\ntooltip_fg_color:#ffffff" style "theme-default" { xthickness = 10 ythickness = 10 GtkEntry::honors-transparent-bg-hint = 0 GtkMenuItem::arrow-spacing = 20 GtkMenuItem::horizontal-padding = 50 GtkMenuItem::toggle-spacing = 30 GtkOptionMenu::indicator-size = {11, 5} GtkOptionMenu::indicator-spacing = {6, 5, 4, 4} GtkTreeView::horizontal_separator = 5 GtkTreeView::odd_row_color = "#efefef" GtkTreeView::even_row_color = "#e3e3e3" GtkWidget::link-color = "#0062dc" # blue GtkWidget::visited-link-color = "#8c00dc" #purple GtkButton::default_border = { 0, 0, 0, 0 } GtkButton::child-displacement-x = 0 GtkButton::child-displacement-y = 1 GtkWidget::focus-padding = 0 GtkRange::trough-border = 0 GtkRange::slider-width = 19 GtkRange::stepper-size = 19 GtkScrollbar::min_slider_length = 36 GtkScrollbar::has-secondary-backward-stepper = 1 GtkPaned::handle_size = 8 GtkMenuBar::internal-padding = 0 GtkTreeView::expander_size = 13 #15 GtkExpander::expander_size = 13 #17 GtkScale::slider-length = 35 GtkScale::slider-width = 17 GtkScale::trough-border = 0 GtkWidget::link-color = "#0062dc" GtkWidget::visited-link-color = "#8c00dc" #purple WnckTasklist::fade-overlay-rect = 0 WnckTasklist::fade-loop-time = 5.0 # 5 seconds WnckTasklist::fade-opacity = 0.5 # final opacity #makes menu only overlap border GtkMenu::horizontal-offset = -1 #removes extra padding at top and bottom of menus. 
Makes menuitem overlap border GtkMenu::vertical-padding = 0 #set to the same as roundness, used for better hotspot selection of tabs GtkNotebook::tab-curvature = 2 GtkNotebook::tab-overlap = 4 GtkMenuItem::arrow-spacing = 10 GtkOptionMenu ::indicator-size = {11, 5} GtkCheckButton ::indicator-size = 16 GtkCheckButton ::indicator-spacing = 1 GtkRadioButton ::indicator-size = 16 GtkTreeView::horizontal_separator = 2 GtkTreeView::odd_row_color = "#efefef" GtkTreeView::even_row_color = "#e3e3e3" NautilusIconContainer::normal_icon_color = "#ff0000" GtkEntry::inner-border = {0, 0, 0, 0} GtkScrolledWindow::scrollbar-spacing = 0 GtkScrolledWindow::scrollbars-within-bevel = 1 fg[NORMAL] = @fg_color fg[ACTIVE] = @fg_color fg[PRELIGHT] = @fg_color fg[SELECTED] = @selected_fg_color fg[INSENSITIVE] = shade (3.0,@fg_color) bg[NORMAL] = @bg_color bg[ACTIVE] = shade (0.95,@bg_color) bg[PRELIGHT] = mix(0.92, shade (1.1,@bg_color), @selected_bg_color) bg[SELECTED] = @selected_bg_color bg[INSENSITIVE] = shade (1.06,@bg_color) base[NORMAL] = @base_color base[ACTIVE] = shade (0.65,@base_color) base[PRELIGHT] = @base_color base[SELECTED] = @selected_bg_color base[INSENSITIVE] = shade (1.025,@bg_color) text[NORMAL] = @text_color text[ACTIVE] = shade (0.95,@base_color) text[PRELIGHT] = @text_color text[SELECTED] = @selected_fg_color text[INSENSITIVE] = mix (0.675,shade (0.95,@bg_color),@fg_color) } style "theme-entry" { xthickness = 10 ythickness = 10 GtkEntry::inner-border = {10, 10, 10, 10} GtkEntry::progress-border = {10, 10, 10, 10} GtkEntry::icon-prelight = 1 GtkEntry::state-hintt = 1 #GtkEntry::honors-transparent-bg-hint = 1 text[NORMAL] = "#000000" text[ACTIVE] = "#787878" text[INSENSITIVE] = "#787878" text[SELECTED] = "#FFFFFF" engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = FALSE file = "./backgrounds/entry_background.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = FLAT_BOX state = PRELIGHT recolorable = FALSE file = "./backgrounds/entry_background.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = FLAT_BOX state = ACTIVE recolorable = FALSE file = "./backgrounds/entry_background.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } #----------------------------------------------- #Chat Balloon Incoming background. style "theme-event-box-top-in" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_in_top.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-event-box-mid-in" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_in_mid.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-event-box-bot-in" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_in_bot.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } #----------------------------------------------- #Chat Balloon Outgoing background. 
style "theme-event-box-top-out" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_out_top.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-event-box-mid-out" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_out_mid.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-event-box-bot-out" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_out_bot.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-wide" = "theme-default" { xthickness = 2 ythickness = 2 } style "theme-wider" = "theme-default" { xthickness = 3 ythickness = 3 } style "theme-button" { GtkButton::inner-border = {0, 0, 0, 0} GtkWidget::focus-line-width = 0 GtkWidget::focus-padding = 0 bg[NORMAL] = "#414143" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#ff0000" bg[INSENSITIVE] = "#434346" fg[NORMAL] = "#ffffff" fg[INSENSITIVE] = "#000000" fg[PRELIGHT] = "#ffffff" fg[SELECTED] = "#ffffff" fg[ACTIVE] = "#ffffff" text[NORMAL] = "#ff0000" text[INSENSITIVE] = "#ff0000" text[PRELIGHT] = "#ff0000" text[SELECTED] = "#ff0000" text[INSENSITIVE] = "#434346" text[ACTIVE] = "#ff0000" base[NORMAL] = "#ff0000" base[INSENSITIVE] = "#ff0000" base[PRELIGHT] = "#ff0000" base[SELECTED] = "#ff0000" base[INSENSITIVE] = "#ff0000" engine "pixmap" { image { function = BOX state = NORMAL recolorable = TRUE file = "./buttons/LightButtonAct.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = BOX state = PRELIGHT recolorable = TRUE file = "./buttons/LightButtonRoll.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = BOX state = ACTIVE recolorable = TRUE file = "./buttons/LightButtonClicked.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = BOX state = INSENSITIVE recolorable = TRUE file = "./buttons/LightButtonInact.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-toolbar" { xthickness = 2 ythickness = 2 bg[NORMAL] = shade (1.078,@bg_color) } style "theme-handlebox" { bg[NORMAL] = shade (0.95,@bg_color) } style "theme-scale" { bg[NORMAL] = shade (1.06, @bg_color) bg[PRELIGHT] = mix(0.85, shade (1.1,@bg_color), @selected_bg_color) bg[SELECTED] = "#4d4d55" } style "theme-range" { bg[NORMAL] = shade (1.12,@bg_color) bg[ACTIVE] = @bg_color bg[PRELIGHT] = mix(0.95, shade (1.10,@bg_color), @selected_bg_color) #Arrows text[NORMAL] = shade (0.275,@selected_fg_color) text[PRELIGHT] = @selected_fg_color text[ACTIVE] = shade (0.10,@selected_fg_color) text[INSENSITIVE] = mix (0.80,shade (0.90,@bg_color),@fg_color) } style "theme-notebook" = "theme-wider" { xthickness = 4 ythickness = 4 GtkNotebook::tab-curvature = 5 GtkNotebook::tab-vborder = 1 GtkNotebook::tab-overlap = 1 GtkNotebook::tab-vborder = 1 bg[NORMAL] = "#d2d2d2" bg[ACTIVE] = "#e3e3e3" bg[PRELIGHT] = "#848484" bg[SELECTED] = "#848484" bg[INSENSITIVE] = "#848484" text[PRELIGHT] = @selected_fg_color text[NORMAL] = "#000000" text[ACTIVE] = "#737373" text[SELECTED] = "#000000" text[INSENSITIVE] = "#737373" fg[PRELIGHT] = @selected_fg_color fg[NORMAL] = "#000000" fg[ACTIVE] = "#737373" fg[SELECTED] = "#000000" fg[INSENSITIVE] = "#737373" } style "theme-paned" { bg[PRELIGHT] = shade (1.1,@bg_color) } style "theme-panel" { # Menu 
fg[PRELIGHT] = @selected_fg_color font_name = "Bold 9" text[PRELIGHT] = @selected_fg_color } style "theme-menu" { xthickness = 0 ythickness = 0 bg[NORMAL] = shade (1.16,@bg_color) bg[SELECTED] = "#ff9a00" text[PRELIGHT] = @selected_fg_color fg[PRELIGHT] = @selected_fg_color } style "theme-menu-item" = "theme-menu" { xthickness = 3 ythickness = 3 base[SELECTED] = "#ff9a00" base[NORMAL] = "#ff9a00" base[PRELIGHT] = "#ff9a00" base[INSENSITIVE] = "#ff9a00" base[ACTIVE] = "#ff9a00" bg[SELECTED] = "#ff9a00" bg[NORMAL] = shade (1.16,@bg_color) } style "theme-menubar" { #TODO } style "theme-menubar-item" = "theme-menu-item" { #TODO bg[SELECTED] = "#ff9a00" } style "theme-tree" { xthickness = 2 ythickness = 1 font_name = "Bold 9" GtkWidget::focus-padding = 0 bg[NORMAL] = "#5a595a" bg[PRELIGHT] = "#5a595a" bg[ACTIVE] = "#5a5a5a" fg[NORMAL] = "#ffffff" fg[ACTIVE] = "#ffffff" fg[SELECTED] = "#ff9a00" fg[PRELIGHT] = "#ffffff" bg[SELECTED] = "#ff9a00" base[SELECTED] = "#ff9a00" base[NORMAL] = "#ff9a00" base[PRELIGHT] = "#ff9a00" base[INSENSITIVE] = "#ff9a00" base[ACTIVE] = "#ff9a00" text[NORMAL] = "#000000" text[PRELIGHT] = "#ff9a00" text[ACTIVE] = "#ff9a00" text[SELECTED] = "#ff9a00" text[INSENSITIVE] = "#434346" } style "theme-tree-arrow" { bg[NORMAL] = mix(0.70, shade (0.60,@bg_color), shade (0.80,@selected_bg_color)) bg[PRELIGHT] = mix(0.80, @bg_color, @selected_bg_color) } style "theme-progressbar" { font_name = "Bold" bg[SELECTED] = @selected_bg_color fg[PRELIGHT] = @selected_fg_color bg[ACTIVE] = "#fe7e00" bg[NORMAL] = "#ffba00" } style "theme-tooltips" = "theme-wider" { font_name = "Liberation sans 10" bg[NORMAL] = @tooltip_bg_color fg[NORMAL] = @tooltip_fg_color text[NORMAL] = @tooltip_fg_color } style "theme-combo" = "theme-button" { xthickness = 4 ythickness = 4 text[NORMAL] = "#fd7d00" text[INSENSITIVE] = "#8a8a8a" base[NORMAL] = "#e0e0e0" base[INSENSITIVE] = "#aeaeae" } style "theme-combo-box" = "theme-button" { xthickness = 3 ythickness = 2 bg[NORMAL] = "#343539" bg[PRELIGHT] = "#343539" bg[ACTIVE] = "#26272b" bg[INSENSITIVE] = "#404145" } style "theme-entry-combo-box" { xthickness = 6 ythickness = 3 text[NORMAL] = "#000000" text[INSENSITIVE] = "#8a8a8a" base[NORMAL] = "#ffffff" base[INSENSITIVE] = "#aeaeae" } style "theme-combo-arrow" = "theme-button" { xthickness = 1 ythickness = 1 } style "theme-view" { xthickness = 0 ythickness = 0 } style "theme-check-radio-buttons" { GtkWidget::interior-focus = 0 GtkWidget::focus-padding = 1 text[NORMAL] = "#ff0000" base[NORMAL] = "#ff0000" text[SELECTED] = "#ffffff" text[INSENSITIVE] = shade (0.625,@bg_color) base[PRELIGHT] = mix(0.80, @base_color, @selected_bg_color) bg[NORMAL] = "#438FC6" bg[INSENSITIVE] = "#aeaeae" bg[SELECTED] = "#ff8a01" } style "theme-radio-buttons" = "theme-button" { GtkWidget::interior-focus = 0 GtkWidget::focus-padding = 1 text[SELECTED] = @selected_fg_color text[INSENSITIVE] = shade (0.625,@bg_color) base[PRELIGHT] = mix(0.80, @base_color, @selected_bg_color) bg[NORMAL] = "#ffffff" bg[INSENSITIVE] = "#dcdcdc" bg[SELECTED] = @selected_bg_color } style "theme-spin-button" { bg[NORMAL] = "#d2d2d2" bg[ACTIVE] = "#868686" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = shade(1.10,@selected_bg_color) bg[INSENSITIVE] = "#dcdcdc" base[NORMAL] = "#ffffff" base[INSENSITIVE] = "#dcdcdc" text[NORMAL] = "#000000" text[INSENSITIVE] = "#aeaeae" } style "theme-calendar" { xthickness = 0 ythickness = 0 bg[NORMAL] = "#676767" bg[PRELIGHT] = shade(0.92,@bg_color) bg[ACTIVE] = "#ff0000" bg[INSENSITIVE] = "#ff0000" bg[SELECTED] = "#ff0000" 
text[PRELIGHT] = "#000000" text[NORMAL] = "#000000" text[INSENSITIVE]= "#000000" text[SELECTED] = "#ffffff" text[ACTIVE] = "#000000" fg[NORMAL] = "#ffffff" fg[PRELIGHT] = "#ffffff" fg[INSENSITIVE] = "#ffffff" fg[SELECTED] = "#ffffff" fg[ACTIVE] = "#ffffff" base[NORMAL] = "#ff0000" base[NORMAL] = "#aeaeae" base[INSENSITIVE] = "#00ff00" base[SELECTED] = "#f3720d" base[ACTIVE] = "#f3720d" } style "theme-separator-menu-item" { xthickness = 1 ythickness = 0 GtkSeparatorMenuItem::horizontal-padding = 2 # We are setting the desired height by using wide-separators # There is no other way to get the odd height ... GtkWidget::wide-separators = 1 GtkWidget::separator-width = 1 GtkWidget::separator-height = 5 } style "theme-frame" { xthickness = 10 ythickness = 0 GtkWidget::LABEL-SIDE-PAD = 14 GtkWidget::LABEL-PAD = 23 fg[NORMAL] = "#000000" fg[ACTIVE] = "#000000" fg[PRELIGHT] = "#000000" fg[SELECTED] = "#000000" fg[INSENSITIVE] = "#000000" bg[NORMAL] = "#e2e2e2" bg[ACTIVE] = "#000000" bg[PRELIGHT] = "#000000" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#000000" base[NORMAL] = "#000000" base[ACTIVE] = "#000000" base[PRELIGHT] = "#000000" base[SELECTED] = "#000000" base[INSENSITIVE]= "#000000" text[NORMAL] = "#000000" text[ACTIVE] = "#000000" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[INSENSITIVE]= "#000000" } style "theme-textview" { text[NORMAL] = "#000000" text[ACTIVE] = "#000000" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[INSENSITIVE] = "#434648" bg[NORMAL] = "#ffffff" bg[ACTIVE] = "#ffffff" bg[PRELIGHT] = "#ffffff" bg[SELECTED] = "#ffffff" bg[INSENSITIVE] = "#ffffff" fg[NORMAL] = "#ffffff" fg[ACTIVE] = "#ffffff" fg[PRELIGHT] = "#ffffff" fg[SELECTED] = "#ffffff" fg[INSENSITIVE] = "#ffffff" base[NORMAL] = "#ffffff" base[ACTIVE] = "#ffffff" base[PRELIGHT] = "#ffffff" base[SELECTED] = "#ff9a00" base[INSENSITIVE] = "#ffffff" } style "theme-clist" { text[NORMAL] = "#000000" text[ACTIVE] = "#000000" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[INSENSITIVE] = "#434648" bg[NORMAL] = "#353438" bg[ACTIVE] = "#ff9a00" bg[PRELIGHT] = "#ff9a00" bg[SELECTED] = "#ff9a00" bg[INSENSITIVE] = "#ffffff" fg[NORMAL] = "#000000" fg[ACTIVE] = "#ff9a00" fg[PRELIGHT] = "#ff9a00" fg[SELECTED] = "#fdff00" fg[INSENSITIVE] = "#757575" base[NORMAL] = "#ffffff" base[ACTIVE] = "#fdff00" base[PRELIGHT] = "#000000" base[SELECTED] = "#fdff00" base[INSENSITIVE] = "#757575" } style "theme-label" { bg[NORMAL] = "#414143" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#434346" fg[NORMAL] = "#000000" fg[INSENSITIVE] = "#434346" fg[PRELIGHT] = "#000000" fg[SELECTED] = "#000000" fg[ACTIVE] = "#000000" text[NORMAL] = "#ffffff" text[INSENSITIVE] = "#434346" text[PRELIGHT] = "#ffffff" text[SELECTED] = "#ffffff" text[ACTIVE] = "#ffffff" base[NORMAL] = "#000000" base[INSENSITIVE] = "#00ff00" base[PRELIGHT] = "#0000ff" base[ACTIVE] = "#f39638" } style "theme-button-label" { bg[NORMAL] = "#414143" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#434346" fg[NORMAL] = "#ffffff" fg[INSENSITIVE] = "#434346" fg[PRELIGHT] = "#ffffff" fg[SELECTED] = "#ffffff" fg[ACTIVE] = "#ffffff" text[NORMAL] = "#000000" text[INSENSITIVE] = "#434346" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[ACTIVE] = "#000000" base[NORMAL] = "#000000" base[INSENSITIVE] = "#00ff00" base[PRELIGHT] = "#0000ff" base[SELECTED] = "#ff00ff" base[ACTIVE] = "#ffff00" } style "theme-button-check-radio-label" { bg[NORMAL] = "#414143" bg[ACTIVE] = 
"#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#434346" fg[NORMAL] = "#000000" fg[INSENSITIVE] = "#434346" fg[PRELIGHT] = "#000000" fg[SELECTED] = "#000000" fg[ACTIVE] = "#000000" text[NORMAL] = "#ffffff" text[INSENSITIVE] = "#434346" text[PRELIGHT] = "#ffffff" text[SELECTED] = "#000000" text[ACTIVE] = "#ffffff" base[NORMAL] = "#000000" base[INSENSITIVE] = "#00ff00" base[PRELIGHT] = "#0000ff" base[SELECTED] = "#ff00ff" base[ACTIVE] = "#ffff00" } style "theme-table" { bg[NORMAL] = "#848484" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#434346" } style "theme-iconview" { GtkWidget::focus-line-width=1 bg[NORMAL] = "#000000" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#c19676" bg[SELECTED] = "#c19676" bg[INSENSITIVE] = "#969696" fg[NORMAL] = "#ffffff" fg[INSENSITIVE] = "#ffffff" fg[PRELIGHT] = "#ffffff" fg[SELECTED] = "#ffffff" fg[ACTIVE] = "#ffffff" text[NORMAL] = "#000000" text[INSENSITIVE] = "#434346" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[ACTIVE] = "#000000" base[NORMAL] = "#ffffff" base[INSENSITIVE] = "#434346" base[PRELIGHT] = "#FAD184" base[SELECTED] = "#FAD184" base[ACTIVE] = "#FAD184" } # Set Widget styles class "GtkWidget" style "theme-default" class "GtkScale" style "theme-scale" class "GtkRange" style "theme-range" class "GtkPaned" style "theme-paned" class "GtkFrame" style "theme-frame" class "GtkMenu" style "theme-menu" class "GtkMenuBar" style "theme-menubar" class "GtkEntry" style "theme-entry" class "GtkProgressBar" style "theme-progressbar" class "GtkToolbar" style "theme-toolbar" class "GtkSeparator" style "theme-wide" class "GtkCalendar" style "theme-calendar" class "GtkTable" style "theme-table" widget_class "*<GtkMenuItem>*" style "theme-menu-item" widget_class "*<GtkMenuBar>.<GtkMenuItem>*" style "theme-menubar-item" widget_class "*<GtkSeparatorMenuItem>*" style "theme-separator-menu-item" widget_class "*<GtkLabel>" style "theme-label" widget_class "*<GtkButton>" style "theme-button" widget_class "*<GtkButton>*<GtkLabel>*" style "theme-button-label" widget_class "*<GtkCheckButton>" style "theme-check-radio-buttons" widget_class "*<GtkToggleButton>.<GtkLabel>*" style "theme-button" widget_class "*<GtkCheckButton>.<GtkLabel>*" style "theme-button-check-radio-label" widget_class "*<GtkRadioButton>.<GtkLabel>*" style "theme-button-check-radio-label" widget_class "*<GtkTextView>" style "theme-textview" widget_class "*<GtkList>" style "theme-textview" widget_class "*<GtkCList>" style "theme-clist" widget_class "*<GtkIconView>" style "theme-iconview" widget_class "*<GtkHandleBox>" style "theme-handlebox" widget_class "*<GtkNotebook>" style "theme-notebook" widget_class "*<GtkNotebook>*<GtkEventBox>" style "theme-notebook" widget_class "*<GtkNotebook>*<GtkDrawingArea>" style "theme-notebook" widget_class "*<GtkNotebook>*<GtkLayout>" style "theme-notebook" widget_class "*<GtkNotebook>*<GtkViewport>" style "theme-notebook" widget_class "*<GtkNotebook>.<GtkLabel>*" style "theme-notebook" #for tabs # Combo Box Stuff widget_class "*<GtkCombo>*" style "theme-combo" widget_class "*<GtkComboBox>*<GtkButton>" style "theme-combo-box" widget_class "*<GtkComboBoxEntry>*" style "theme-entry-combo-box" widget_class "*<GtkSpinButton>*" style "theme-spin-button" widget_class "*<GtkSpinButton>*<GtkArrow>*" style:highest "theme-tree-arrow" # Tool Tips Stuff widget "gtk-tooltip*" style "theme-tooltips" # Tree View Stuff widget_class "*<GtkTreeView>.<GtkButton>*" style "theme-tree" widget_class 
"*<GtkCTree>.<GtkButton>*" style "theme-tree" widget_class "*<GtkList>.<GtkButton>*" style "theme-tree" widget_class "*<GtkCList>.<GtkButton>*" style "theme-tree" # For arrow bg widget_class "*<GtkTreeView>.<GtkButton>*<GtkArrow>" style "theme-tree-arrow" widget_class "*<GtkCTree>.<GtkButton>*<GtkArrow>" style "theme-tree-arrow" widget_class "*<GtkList>.<GtkButton>*<GtkArrow>" style "theme-tree-arrow" ####################################################### ## GNOME specific ####################################################### widget_class "*.ETree.ECanvas" style "theme-tree" widget_class "*.ETable.ECanvas" style "theme-tree" style "panelbuttons" = "theme-button" { # As buttons are draw lower this helps center text xthickness = 3 ythickness = 3 } widget_class "*Panel*<GtkButton>*" style "panelbuttons" style "murrine-fg-is-text-color-workaround" { text[NORMAL] = "#000000" text[ACTIVE] = "#fdff00" text[SELECTED] = "#fdff00" text[INSENSITIVE] = "#757575" bg[SELECTED] = "#b85e03" bg[ACTIVE] = "#b85e03" bg[SELECTED] = "#b85e03" fg[SELECTED] = "#ffffff" fg[NORMAL] = "#ffffff" fg[ACTIVE] = "#ffffff" fg[INSENSITIVE] = "#434348" fg[PRELIGHT] = "#ffffff" base[SELECTED] = "#ff9a00" base[NORMAL] = "#ffffff" base[ACTIVE] = "#ff9a00" base[INSENSITIVE] = "#434348" base[PRELIGHT] = "#ffffff" } widget_class "*.<GtkTreeView>*" style "murrine-fg-is-text-color-workaround" style "murrine-combobox-text-color-workaround" { text[NORMAL] = "#FFFFF" text[PRELIGHT] = "#FFFFF" text[SELECTED] = "#FFFFF" text[ACTIVE] = "#FFFFF" text[INSENSITIVE] = "#FFFFF" } widget_class "*.<GtkComboBox>.<GtkCellView>" style "murrine-combobox-text-color-workaround" style "murrine-menuitem-text-is-fg-color-workaround" { bg[NORMAL] = "#0000ff" text[NORMAL] = "#ffffff" text[PRELIGHT] = "#ffffff"#"#FD7D00" text[SELECTED] = "#ffffff"#"#ff0000"# @selected_fg_color text[ACTIVE] = "#ffffff"#"#ff0000"# "#FD7D00" text[INSENSITIVE] = "#ffffff"#ff0000"# "#414143" } widget "*.gtk-combobox-popup-menu.*" style "murrine-menuitem-text-is-fg-color-workaround"

    Read the article

  • Bootstrap responsive CSS [migrated]

    - by savolai
    I have a four-column design and I am using Bootstrap. The design renders fine as a single column on mobile devices, but in "(min-width: 768px) and (max-width: 979px)" I get four columns even though there is room for only two. So clearly the rows/spans setup needs to be rethought for those sizes. The only way I can think of doing this is to use semantic CSS classes in the HTML, pull the grid classes into the CSS with LESS, and then, depending on screen size, apply different grid classes to get a four- or two-column layout. I'm not sure that would work either, though. Is this the way to go, or am I overcomplicating it? Thanks! Also at: https://groups.google.com/forum/#!topic/twitter-bootstrap/R5jEp0oQ_-E
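    One way to do that without duplicating markup, sketched against Bootstrap 2's breakpoints with a hypothetical semantic class (an illustration of the approach, not Bootstrap's own grid classes): keep a single column class in the HTML and override its width per media query, so four columns become two rows of two in the tablet range and one column on phones.

        /* Sketch: widths are illustrative, not Bootstrap's grid values */
        .catalog-col { float: left; width: 23%; margin-left: 2%; }

        @media (min-width: 768px) and (max-width: 979px) {
          .catalog-col { width: 48%; }
          .catalog-col:nth-child(odd) { margin-left: 0; clear: left; }  /* two per row */
        }

        @media (max-width: 767px) {
          .catalog-col { float: none; width: auto; margin-left: 0; }    /* single column */
        }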

    Read the article

  • VSFTPD Unable to set write permissions on folder

    - by Frank Astin
    I've just set up my first FTP server with VSFTPD on cent os . I can connect to it fine using a user in the group ftp-users but I get read only access . I've tried several different CHMOD codes on the folder (even 777) all to no avail . This is the tutorial I used to set up the server http://tinyurl.com/73pyuxz hopefully you'll be able to see something I missed. Thanks in advance . Requested Config File : # Example config file /etc/vsftpd/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=NO # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log #xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. 
#ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd/chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd whith two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES

    Read the article

  • vsftpd not allowing uploads. 550 response

    - by Josh
    I've set vsftpd up on a centos box. I keep trying to upload files but I keep getting "550 Failed to change directory" and "550 Could not get file size." Here's my vsftpd.conf # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=YES # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. anon_mkdir_write_enable=YES anon_other_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log #xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=NO # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. 
Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd/chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd whith two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES log_ftp_protocol=YES banner_file=/etc/vsftpd/issue local_root=/var/www guest_enable=YES guest_username=ftpusr ftp_username=nobody

    Read the article

  • Centos 6.3 vsftp unable to upload file to apache webserver

    - by user148648
    I am new to Centos, I did work with Sun Solaris and upload files to Apache web server before. I create an end user account and manage to ftp using command prompt to the server, error message is '226 Transfer Done (but failed to open directory). Content of my vsftpd.conf as below # Example config file /etc/vsftpd/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=YES # ** may need to comment it back # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) #local_umask=022 local_umask=077 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. anon_upload_enable=YES # *** maybe to comment it back!!! # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. anon_mkdir_write_enable=YES # ** may need to comment it back!!! # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. 
ascii_upload_enable=YES ascii_download_enable=YES # # You may fully customise the login banner string: ftpd_banner=Warning, only for authorize login. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). chroot_local_user=YES chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd/chroot_list local_root=/var/www # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd with two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES

    Read the article

  • what are the benefits of closure, primarily for PHP?

    - by Patrick
    I am beginning the process of moving code over to PHP 5.3, and one of the most highly touted features of PHP 5.3 is the ability to use closures. My understanding of closures is that they are anonymous functions that can be assigned to variables and have interesting scoping abilities. From my point of view, the only apparent benefit in real-world applications is the reduction of clutter in the namespace, because closures are anonymous. Am I wrong about this? Should I be trying to use closures everywhere I code? EDIT: I have already read this post on JavaScript closures.

    Read the article

  • A Semantic Model For Html: TagBuilder and HtmlTags

    - by Ryan Ohs
    In this post I look into the code smell that is HTML literals and show how we can refactor these pesky strings into a friendlier and more maintainable model.   The Problem When I started writing MVC applications, I quickly realized that I built a lot of my HTML inside HtmlHelpers. As I did this, I ended up moving quite a bit of HTML into string literals inside my helper classes. As I wanted to add more attributes (such as classes) to my tags, I needed to keep adding overloads to my helpers. A good example of this end result is the default html helpers that come with the MVC framework. Too many overloads make me crazy! The problem with all these overloads is that they quickly muck up the API and nobody can remember exactly what order the parameters go in. I've seen many presenters (including members of the ASP.NET MVC team!) stumble before realizing that their view wasn't compiling because they needed one more null parameter in the call to Html.ActionLink(). What if instead of writing Html.ActionLink("Edit", "Edit", null, new { @class = "navigation" }) we could do Html.LinkToAction("Edit").Text("Edit").AddClass("navigation") ? Wouldn't that be much easier to remember and understand?  We can do this if we introduce a semantic model for building our HTML.   What is a Semantic Model? According to Martin Folwer, "a semantic model is an in-memory representation, usually an object model, of the same subject that the domain specific language describes." In our case, the model would be a set of classes that know how to render HTML. By using a semantic model we can free ourselves from dealing with strings and instead output the HTML (typically via ToString()) once we've added all the elements and attributes we desire to the model. There are two primary semantic models available in ASP.NET MVC: MVC 2.0's TagBuilder and FubuMVC's HtmlTags.   TagBuilder TagBuilder is the html builder that is available in ASP.NET MVC 2.0. I'm not a huge fan but it gets the job done -- for simple jobs.  Here's an overview of how to use TagBuilder. See my Tips section below for a few comments on that example. The disadvantage of TagBuilder is that unless you wrap it up with our own classes, you still have to write the actual tag name over and over in your code. eg. new TagBuilder("div") instead of new DivTag(). I also think it's method names are a little too long. Why not have AddClass() instead of AddCssClass() or Text() instead of SetInnerText()? What those methods are doing should be pretty obvious even in the short form. I also don't like that it wants to generate an id attribute from your input instead of letting you set it yourself using external conventions. (See GenerateId() and IdAttributeDotReplacement)). Obviously these come from Microsoft's default approach to MVC but may not be optimal for all programmers.   HtmlTags HtmlTags is in my opinion the much better option for generating html in ASP.NET MVC. It was actually written as a part of FubuMVC but is available as a stand alone library. HtmlTags provides a much cleaner syntax for writing HTML. There are classes for most of the major tags and it's trivial to create additional ones by inheriting from HtmlTag. There are also methods on each tag for the common attributes. For instance, FormTag has an Action() method. The SelectTag class allows you to set the default option or first option independent from adding other options. With TagBuilder there isn't even an abstraction for building selects! The project is open source and always improving. 
I'll hopefully find time to submit some of my own enhancements soon.   Tips 1) It's best not to have insanely overloaded HTML helpers. Use fluent builders. 2) In HTML helpers, return the TagBuilder/tag itself (not a string!) so that you can continue to add attributes outside the helper; see my first sample above. 3) Create a static entry point into your builders. I created a static Tags class that gives me access to all the HtmlTag classes I need. This way I don't clutter my code with "new" keywords, e.g. Tags.Div returns a new DivTag instance. 4) If you find yourself doing something a lot, create an extension method for it. I created a Nest() extension method that reads much more fluently than the AddChildren() method. It also accepts a params array of tags, so I can very easily nest many children.   I hope you have found this post helpful. Join me in my war against HTML literals! I'll have some more samples of how I use HtmlTags in future posts.

    Read the article

  • Low coupling processing big quantities of data

    - by vitalik
    Usually I achieve low coupling by creating classes that exchange lists, sets, and maps between them. Now I am developing a batch application and I can't put all the data inside a data structure, because there isn't enough memory. I have to read and process one chunk of data and then move on to the next one. So keeping the coupling low is much more difficult, because I have to check somewhere whether there is still data to read, etc. What I am using now is: Source - Process - Persist. The classes that do the processing have to ask the Source classes whether there are more rows to read. What are the best practices and/or useful patterns in such situations? I hope I am explaining myself clearly; if not, tell me.
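    One common shape for this - a rough sketch only, in Java (assumed, since the question mentions rows), with invented names such as ListChunkSource and BatchProcessor - is to hide the "is there still data?" question behind the standard Iterator interface, so the processing class depends only on that small contract and never asks the Source classes anything directly:

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        // Hypothetical source that hands out the data one chunk at a time.
        // "Is there still data to read?" is answered by hasNext(), so the
        // processing side never needs to know how or where the rows are stored.
        class ListChunkSource implements Iterator<List<String>> {
            private final List<String> rows;
            private final int chunkSize;
            private int position = 0;

            ListChunkSource(List<String> rows, int chunkSize) {
                this.rows = rows;
                this.chunkSize = chunkSize;
            }

            public boolean hasNext() {
                return position < rows.size();
            }

            public List<String> next() {
                int end = Math.min(position + chunkSize, rows.size());
                List<String> chunk = new ArrayList<String>(rows.subList(position, end));
                position = end;
                return chunk;
            }

            public void remove() {
                throw new UnsupportedOperationException();
            }
        }

        // Process + Persist: depends only on Iterator, not on any concrete source.
        class BatchProcessor {
            void run(Iterator<List<String>> source) {
                while (source.hasNext()) {
                    for (String row : source.next()) {
                        persist(row.toUpperCase());   // stand-in for the real processing step
                    }
                }
            }

            private void persist(String processedRow) {
                System.out.println("persisted: " + processedRow);
            }
        }

        public class BatchDemo {
            public static void main(String[] args) {
                List<String> rows = new ArrayList<String>();
                for (int i = 0; i < 10; i++) {
                    rows.add("row-" + i);
                }
                new BatchProcessor().run(new ListChunkSource(rows, 3));
            }
        }

    A real source would wrap a file reader or a database cursor behind the same interface; the processor code would not need to change.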

    Read the article

  • Is there a "golden ratio" in coding?

    - by badallen
    My coworkers and I often come up with silly ideas, such as adding entries to Urban Dictionary that are inappropriate but make complete sense if you are a developer, or making rap songs about delegates, reflection or closures in JS... Anyhow, here is what I brought up this afternoon, which was immediately dismissed as a stupid idea, so I want to see if I can get some redemption here. My idea is coming up with a golden ratio (or something in that neighborhood) between the number of classes per project versus the number of methods/functions per class versus the number of lines per method/function. I know this is silly and borderline, if not completely, useless, but just think of all the legacy methods or classes you have encountered that are absolutely horrid - like methods with 10000 lines or classes with 10000 methods. So, golden ratio, anyone? :)

    Read the article

  • How do I prove or disprove "god" objects are wrong?

    - by honestduane
    Problem Summary: Long story short, I inherited a code base and a development team I am not allowed to replace, and the use of God Objects is a big issue. Going forward, I want to have us refactor things, but I am getting push-back from the teams, who want to do everything with God Objects "because it's easier", and this means I would not be allowed to refactor. I pushed back citing my years of dev experience and that I'm the new boss who was hired to know these things, etc., and so did the third-party offshore company's account sales rep, and this is now at the executive level. My meeting is tomorrow, and I want to go in with a lot of technical ammo to advocate best practices, because I feel it will be cheaper for the company in the long run (and I personally feel that is what the third party is worried about). My issue is that, at a technical level, I know it is good in the long term, but I'm having trouble with the ultra-short term and the six-month term, and while it is something I "know", I can't prove it with references and cited resources outside of one person (Robert C. Martin, aka Uncle Bob), which is what I am being asked to do, as I have been told that having data from one person and only one person (Robert C. Martin) is not a good enough argument. Question: What are some resources I can cite directly (title, year published, page number, quote) by well-known experts in the field that explicitly say this use of "God" Objects/Classes/Systems is bad (or good, since we are looking for the most technically valid solution)? Research I have already done: I have a number of books here and I have searched their indexes for the words "god object" and "god class". I found that, oddly, the term is almost never used; the copy of the GoF book I have, for example, never uses it (at least according to the index in front of me), but I have found it in two books, per the below, and I want more I can use. I checked the Wikipedia page for "God Object" and it is currently a stub with few reference links, so although I personally agree with what it says, it doesn't have much I can use in an environment where personal experience is not considered valid. The book cited there is also considered too old to be valid by the people I am debating these technical points with, as the argument they are making is that "it was once thought to be bad but nobody could prove it, and now modern software says "god" objects are good to use". I personally believe that this statement is incorrect, but I want to prove the truth, whatever it is. Robert C. Martin's "Agile Principles, Patterns, and Practices in C#" (ISBN: 0-13-185725-8, hardcover) states on page 266: "Everybody knows that god classes are a bad idea. We don't want to concentrate all the intelligence of a system into a single object or a single function. One of the goals of OOD is the partitioning and distribution of behavior into many classes and many function." -- and then goes on to say that it is sometimes better to use God Classes anyway (citing micro-controllers as an example). Robert C. Martin's "Clean Code: A Handbook of Agile Software Craftsmanship" talks about the "God class" on page 136 (and only this page) and calls it out as a prime example of a violation of the "classes should be small" rule he uses to promote the Single Responsibility Principle, starting on page 138. The problem I have is that all my references and citations come from the same person (Robert C. Martin) - the same single source.
I am being told that because he is just one guy, my desire not to use "God Classes" is invalid and not accepted as a standard best practice in the software industry. Is this true? Am I doing things wrong from a technical perspective by trying to keep to the teachings of Uncle Bob? God Objects and Object-Oriented Programming and Design: The more I think of this, the more I think it is something you learn when you study OOP and it is never explicitly called out; my thinking is that it is implicit in good design (feel free to correct me, please, as I want to learn). The problem is that I "know" this, but not everybody does, so in this case it is not considered a valid argument, because I am effectively calling it out as a universal truth when in fact most people are statistically ignorant of it, since statistically most people are not programmers. Conclusion: I am at a loss as to what to search for to get the best additional results to cite, since they are making a technical claim and I want to know the truth and be able to prove it with citations like a real engineer/scientist, even if I am biased against god objects due to my personal experience with code that used them. Any assistance or citations would be deeply appreciated.
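    For what it is worth, the technical point in the Martin quotation is easy to show in a few lines. A toy Java sketch (the class names are invented here, not taken from any of the cited books) of the two styles:

        // "God" style: one class holds all the intelligence, so every change lands here.
        class OrderGod {
            void validate(String order) { /* validation rules */ }
            double price(String order) { return 0.0; /* pricing rules, placeholder */ }
            void saveToDatabase(String order) { /* persistence */ }
            void emailCustomer(String order) { /* notification */ }
            void printInvoice(String order) { /* reporting */ }
        }

        // Partitioned style: behaviour is distributed, each class has one reason to change.
        class OrderValidator {
            boolean isValid(String order) { return order != null && !order.isEmpty(); }
        }
        class OrderPricer {
            double price(String order) { return 42.0; /* placeholder */ }
        }
        class OrderRepository {
            void save(String order) { /* persistence only */ }
        }
        class OrderNotifier {
            void email(String order) { /* notification only */ }
        }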

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction into the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present of that server, how to compose on top of them and how to manage their lifetime. Installing the driver Info on how to install the driver can be found in an earlier blog post here. Establishing connections As you click on the “Add Connection” link in the left pane you will notice that now it’s possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached etc.). The context for remote servers Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the following: The application object itself. All entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later. Remoting is not supported As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If for instance you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process. ObservableSource.Dump(); Will yield the following error: Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer. 
This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that. Compose queries You may ask – what's the purpose of all that? After all the same information is present in the EventFlowDebugger, why bother with showing it in LinqPad? First of all, What gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity: using (var server = Server.Connect(...)) {     var a = server.CreateApplication("WhiteFish");     var src = a         .DefineObservable<int>(() => Observable.Range(0, 3))         .Deploy("ObservableSource"); If later we want to compose with the source we have to fetch it and then we can bind something to     a.GetObservable<int>("ObservableSource)").Bind(... This means that we had to know a bunch of things about this: that it’s a source, that it’s an observable, it produces a result with payload Int32 and it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it’s much easier to make sense of what’s available. Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>> - this in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int – in the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters appropriate and run the process. Proxy Types Here’s an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used. Worse yet, they might have used an anonymous type and even though they can refer to it by name you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t need to have the definition of. This is how we can create a source that returns an anonymous type: Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1"); Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type. 
var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5); var filter = from i in O1              where i > threshold              select i; filter.Deploy("O2"); You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is if the LinqPad driver can resolve the type or not. If it’s not possible then a new type will be generated that approximates the type that exists on the server. Control metadata In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us what’s the name of the entity used to build this property in the context. Runtime information So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you can have access to process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view etc.). Regards, The StreamInsight Team

    Read the article

  • Best practice to propagate preferences of application

    - by Shebuka
    What is your approach to propagating preferences/settings to all the classes/windows of your application? Do you share the preference_manager class with all classes/windows that need it, or do you create variables in each class/window and update them manually each time a setting is changed? Currently I have a PreferencesInterface class that holds all preferences and is responsible for defaulting all values with a dedicated method called on creation and when needed; all values are public, so no getters/setters, and it also has virtual SavePreferences/LoadPreferences methods. Then I have a PreferencesManager that extends PreferencesInterface and is responsible for the actual implementation of SavePreferences/LoadPreferences. I've done this basically for cross-platform support, so that every platform can have a different implementation of the actual storage (registry, ini, plist, xml, whatever).
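    One common middle ground - sketched here in Java with invented names, although the classes described above sound like C++ - is to keep a single preferences object and let interested windows register as listeners, so nothing has to copy values around or poll the manager:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Windows implement this instead of keeping their own copies of the settings.
        interface PreferenceListener {
            void preferenceChanged(String key, String newValue);
        }

        class PreferencesManager {
            private final Map<String, String> values = new HashMap<String, String>();
            private final List<PreferenceListener> listeners = new ArrayList<PreferenceListener>();

            PreferencesManager() {
                // defaults live in one place, like the dedicated defaulting method above
                values.put("theme", "dark");
                values.put("language", "en");
            }

            String get(String key) {
                return values.get(key);
            }

            void set(String key, String value) {
                values.put(key, value);
                for (PreferenceListener l : listeners) {
                    l.preferenceChanged(key, value);   // push the change; no manual syncing per window
                }
            }

            void addListener(PreferenceListener l) {
                listeners.add(l);
            }

            // Platform-specific storage (registry, ini, plist, xml, ...) would sit behind these.
            void save() { /* write values to the platform store */ }
            void load() { /* read values back */ }
        }

        // A window holds no settings of its own; it just reacts when something changes.
        class SettingsAwareWindow implements PreferenceListener {
            public void preferenceChanged(String key, String newValue) {
                System.out.println("window refreshing because " + key + " is now " + newValue);
            }
        }

        public class PreferencesDemo {
            public static void main(String[] args) {
                PreferencesManager prefs = new PreferencesManager();
                prefs.addListener(new SettingsAwareWindow());
                prefs.set("theme", "light");
            }
        }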

    Read the article

  • CodePlex Daily Summary for Monday, March 26, 2012

    CodePlex Daily Summary for Monday, March 26, 2012Popular ReleasesQuick Performance Monitor: Version 1.8.1: Added option to set main window to be 'Always On Top'. Use context (right-click) menu on graph to toggle..Net Rest API for Kayako Fusion 4: kayako_rest_api_2012.03.26: Added ability to search for users via organisation/email. This is much quicker than getting all users then filtering.GeoMedia PostGIS data server: PostGIS GDO 1.0.1.1: This is a new version of GeoMeda PostGIS data server which supports user rights. It means that only those feature classes, which the current user has rights to select, are visible in GeoMedia. Issues fixed in this release Fixed a problem when gdo.gfeaturesbase table has been visible in GeoMedia. To hide this table, run the previous version of Database Utilities and uncheck this table in the feature classes list. Then load the new release. Fixed a problem when coordinate system list has not...Silverlight 4 & 5 Persian DatePicker: Silverlight 4 and 5 Persian DatePicker: Added Silverlight 5 support.Y.Music: Y.Music v.1.0: ?????? ?????? ?????????. ????????: ????? ???? ????, ??????? ?????? ? ??? - Beta.Asp.NET Url Router: v1.0: build for .net 2.0 and .net 4.0menu4web: menu4web 0.0.3: menu4web 0.0.3ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Final: This release installs both the ArcGIS Editor for OSM Server Component and/or ArcGIS Editor for OSM Desktop components. The Desktop tools allow you to download data from the OpenStreetMap servers and store it locally in a geodatabase. You can then use the familiar editing environment of ArcGIS Desktop to create, modify, or delete data. Once you are done editing, you can post back the edit changes to OSM to make them available to all OSM users. The Server Component allows you to quickly create...Craig's Utility Library: Craig's Utility Library 3.1: This update adds about 60 new extension methods, a couple of new classes, and a number of fixes including: Additions Added DateSpan class Added GenericDelimited class Random additions Added static thread friendly version of Random.Next called ThreadSafeNext. AOP Manager additions Added Destroy function to AOPManager (clears out all data so system can be recreated. Really only useful for testing...) ORM additions Added PagedCommand and PageCount functions to ObjectBaseClass (same as M...XNA Electric Effect: Jason Electric Effect v1.1: The library now includes 3 effect types: Line, Bezier, CatmullRom, providing different look and feel.DotSpatial: DotSpatial 1.1: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components are available as NuGet pa...Change default Share-site group SharePoint Online (Office 365): Change default Share-site group SharePoint Online: As default when we share a site collection or site with external users, SharePoint Online show default SharePoint groups which are Visitors and Members. 
By using this feature, you will get a link which you can use to customize the default groups to your custom groups and other default groups.Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: Microsoft All-In-One Code Framework Coding Guideline This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.Working with Social Data: Tag Cloud Customization: http://swatipoint.blogspot.com/2011/10/sharepoint-2010-social-featurestagging.htmlWebDAV for WHS: Version 1.0.67: - Added: Check whether the Remote Web Access is turned on or not; - Added: Check for Add-In updates;Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: March release of Phalanger 3.0 significantly enhances performance, adds new features and fixes many issues. See following for the list of main improvements: New features: Phalanger Tools installable for Visual Studio 2011 Beta "filter" extension with several most used filters implemented DomDocument HTML parser, loadHTML() method mail() PHP compatible function PHP 5.4 T_CALLABLE token PHP 5.4 "callable" type hint PCRE: UTF32 characters in range support configuration supports <c...Nearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: Internationalization Custom authentication provider Access control list for forums and threads Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01 Visit Roadmap for more details.BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new... Analysis Services Tabular Smart Diff Tabular Actions Editor Tabular HideMemberIf Tabular Pre-Build ...Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build New feature - JsonTextReader automatically reads ISO strings as dates New feature - Added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default New feature - Added DateTimeZoneHandling to control reading and writing DateTime time zone details New feature - Added async serialize/deserialize methods to JsonConvert New feature - Added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...New ProjectsASIVeste: No description availableAuthor-it Sync Headings Plug-in: Author-it plug-in that allows the user to synchronize the Print, Help, and Web headings with the Description for each selected topic.BlogEngine.Web: BlogEngine.Web is a BlogEngine.Net converted to use Web Application Project model (WAP).Code Writer Helper: A quick solution to help code generator writers.CodeUITest: Practise CodeUI automation.DAX Studio: Excel Add-In for PowerPivot and Analysis Services Tabular projects that will include an Object Browser, query editing and execution, formula and measure editing ,syntax highlighting, integrated tracing and query execution breakdowns.Fated: Fated is an isometric-viewed, tile-based tactical RPG developed in C# using XNA to be deployed to XBox. 
This includes a character generation core, graphics engine, and storyline parser.iSufe???: “iSufe???”??????????????????????????????。???????????、????、?????????,??????????????。??,????iOS/Android/WAP???????,???????????????。????GPLv2??,?????????????。Kinect test project: Basic project for my kinect test applicationLoLTimers: LoLTimers by Christian Schubert 2012. Version 1.0.0.0 This is a small app that lets you keep track of the most important creep camp cooldowns. Developed in Visual Studio C#.London Priority Security Services Ltd: LPSS - London Priority Security Services LtdNMCNPM: code nhóm nmcnpmOffice 365 Anonymous Access Manager Sandbox Solution: The sandbox solution enables you to manage anonymous access of lists on Office 365. It allows setting read, modifying and adding rights. Additionally the configuration page adds the necessary events to be able to use moderations, when anonymous users are creating a list item. The second feature in the solution enables anonymous access on blogs sites, it allows to enable anonymous users to comment on a blog.Office 365 Google Analytics: This sandbox solution enables google analytics everywhere in your site collection. This allows you to use the google analytics reporting on all your Office 365 sites.Office 365 Mobile Access Enables for Public Sites & Blogs: This sandbox solution enables mobile access on Office 365 sites.OwnMFCSolution: MFC test solution.People Data Generator: Need to load a bunch of test data to represent people (e.g. name, address, phone, etc.)? Wish it looked realistic? People Data Generator is what you need. Features: *Realistic names *Realistic addresses, using real towns and postal codes *Realistic phone numbers and emails *Very ExtensibleProventi: Met dit programma kan je je voorraad van je onderneming beheren. Dit programma zal in eerste instantie gebruikt worden binnen de minionderneming Proventi. Het programma is geschreven in VB.Net en maakt gebruik van SQL Server CE voor de gegevensopslag.qCommerce: ??????????? ???????, ???????????? ??? ????? qSoftwareQuanLyOTo: Ð? án môn h?c C# qu?n lý garage ô tôRoyaSoft.ir Resources: i am use this project for my personal web site :)SGPF: The team does not have nothing to declare here!SharePoint 2010 Autocomplete Lookup Field: Autocomplete Lookup field allows type ahead functionality while entering lookup values in list items.Sharing Photos using SignalR: An MVC application using SignalR that can be used to share photos between friends and get realtime updates. An user connected to the website can upload a photo which will be automatically broadcasted to all clients connected at that point.Sistema Hoteleiro: Sistema Hoteleiro é o meu trabalho final da disciplina Arquitetura de Aplicativos Ambiente .NET da 4a turma do curso de pós-graduação de especialização em Arquitetura de Sistemas Distribuídos oferecido pelo Instituto de Educação Continuada da Pontifícia Universidade Católica de MSoftware Revolution: This project is core information site of Software Revolution named company which provides software solutions.tgryi: tgyrivbWSUS: Really decide which and when to install updates from a centralized server, globally or per host : - installation schedule - updates to install - email results - configure extra Windows Update parameters Works with WSUS server or Windows Update from Microsoft. See README.txt for more informations ! :) Current official website is http://sourceforge.net/projects/vbwsus/XamlCombine: Combines multiple XAML resource dictionaries in one. 
Replaces DynamicResources to StaticResources. And sort them in order of usage.XNA Shader-free Linear Burn effect: Sample demonstrating a Linear Burn effect in XNA without using custom shaderszhCms: zhCmszhtest: my test project

    Read the article

  • Implementing separation of concerns via MVC

    - by user2368481
    I'm creating this question to see if my understanding of MVC separation is correct; I haven't been able to find a clear answer anywhere online. Is this the right way to implement it (in Java)? I would have three .java files, one each for Model, Controller and View. I would put all the classes related to the Model in Model.java, like so: //Model.java { public class Model //class fields public Model(); public ModelClassA(); public ModelClassB(); public ModelClassC(); } with the ModelClasses being any class that I consider to belong to the Model. Is it correct to have these classes within the Model class, given that I have read that nested classes should be avoided where possible?
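    The more conventional reading of "separate files for Model, View and Controller" is one top-level class per file (grouped into packages), rather than nesting every model class inside Model. A minimal Java sketch of that layout - the class contents here are invented purely for illustration:

        // Model.java - state only
        public class Model {
            private String message = "hello";
            public String getMessage() { return message; }
            public void setMessage(String message) { this.message = message; }
        }

        // View.java - rendering only
        public class View {
            public void render(String message) {
                System.out.println("view shows: " + message);
            }
        }

        // Controller.java - coordinates the other two
        public class Controller {
            private final Model model;
            private final View view;

            public Controller(Model model, View view) {
                this.model = model;
                this.view = view;
            }

            public void updateMessage(String text) {
                model.setMessage(text);          // change the state
                view.render(model.getMessage()); // then refresh the view
            }
        }

        // Further model classes (ModelClassA, ModelClassB, ...) would be their own
        // top-level classes in a package such as myapp.model, not members of Model.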

    Read the article

  • Subclassing to avoid line length

    - by Super User
    The standard line length of code is 80 characters per line. This is accepted and followed by most programmers. I am working on a state machine for a character, and it is necessary for me to follow this too. I have four classes that exceed this limit. I could subclass each class into two more and thereby stay under the limit: class Stand, class Walk, class Punch, class Crouch. The new classes would be StandLeft, StandRight, and so on; Stand, Walk, Punch and Crouch would then become abstract classes. The question is whether there is a limit to the depth of the hierarchy tree, or whether this depends on the case.
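    For illustration, the split described above would look roughly like this in Java (the class bodies are invented):

        // Shared behaviour stays in the abstract parent.
        abstract class Stand {
            void enter() { System.out.println("entering stand state"); }
            abstract void update();   // the direction-specific part moves into the subclasses
        }

        class StandLeft extends Stand {
            void update() { System.out.println("standing, facing left"); }
        }

        class StandRight extends Stand {
            void update() { System.out.println("standing, facing right"); }
        }

        // Walk, Punch and Crouch would be split the same way.

    A two-level hierarchy like this is still shallow; depth usually only becomes a concern when subclasses start overriding one another's behaviour in ways that are hard to follow.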

    Read the article

  • UML Class diagrams with Java packages?

    - by loosebruce
    I am trying to model, in UML 2.0, a Java servlet application that has three classes: Servlet, essentially a main class that acts as the controller; DatabaseLogic, which contains methods for database operations; and XMLBuilder, which builds XML from a query result string. The classes use a variety of packages from the Java library, and I am unsure how to model this in UML. Do I have to create a package and show which libraries are used for each individual class, or can I just have one large package in the diagram with all the libraries, showing which classes have dependencies on which? As per this diagram. This is my first time working with Java properly (I'm a C++ guy). Apart from being a bit messy, is this a correct UML representation of the system I described? Does a package in UML mean the same as a package in Java?
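    As a rough Java illustration of the mapping (the package and class names are invented, and the servlet plumbing is left out): each Java package normally becomes one UML package, and each import becomes a dependency from the class's package to the imported package.

        // File: com/example/webapp/ReportServlet.java
        package com.example.webapp;          // one UML package: com.example.webapp

        import java.io.IOException;          // dependency on the java.io package
        import java.sql.Connection;          // dependencies on the java.sql package
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class ReportServlet {

            // DatabaseLogic and XMLBuilder would sit in the same application package
            // (or sub-packages such as com.example.webapp.db) and be drawn as classes
            // inside that UML package, each with arrows to the JDK packages they use.

            public String loadReport(String jdbcUrl) throws IOException {
                try {
                    Connection connection = DriverManager.getConnection(jdbcUrl);
                    connection.close();
                    return "<report/>";
                } catch (SQLException e) {
                    throw new IOException("query failed: " + e.getMessage());
                }
            }
        }

    So a package in UML and a package in Java line up closely in practice: both are namespaces that group related classes, and the diagram's dependency arrows correspond to the imports.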

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >