Danack's blog: Coding, photography and brain dumps.
http://blog.basereality.com/
# EU decision making incoherency
/blog/32/EU_decision_making_incoherency
One of the things I'm interested in is how relatively smart people can still make terrible decisions.
Since the middle of January there has been a, quite frankly, bizarre series of events concerning the EU commission: the statements they have made and the actions they have taken.
A lot of the reporting of these events, and of the previous events that led to them, has been quite poor (imho), so this blog post started as me trying to just get the facts down. However, I now have a strong suspicion about the sequence of events that led to the bizarre events of the last week.
<!-- end_preview -->
By the way, if you are at all interested in how organisations make poor decisions, I strongly recommend acquiring a copy of [Systemantics](https://en.wikipedia.org/wiki/Systemantics).
## How did we get here
First, a table of when the UK and EU placed initial orders for vaccines, the time difference between those two, and the status of the vaccines.
<table style="vertical-align: top; width:900px">
<thead>
<tr>
<th>Vaccine</th>
<th>UK order</th>
<th>EU order</th>
<th>Difference</th>
<th>Vaccine status</th>
</tr>
</thead>
<tbody>
<tr>
<td>AstraZeneca</td>
<td><a href="https://www.cambridgeindependent.co.uk/business/astrazeneca-to-begin-supplying-100-million-doses-of-covid-19-vaccine-to-uk-from-september-if-trials-succeed-9110892/">100 million 25 May 2020</a></td>
<td><a href="https://ec.europa.eu/cyprus/news/20200827_2_en">300 million 27 August 2020</a></td>
<td>3 months</td>
<td>Approved</td>
</tr>
<tr>
<td>CureVac</td>
<td>No order</td>
<td><a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2136">225 million 17 November 2020</a></td>
<td>N/A</td>
<td>Phase 2b/3</td>
</tr>
<tr>
<td>GSK/Sanofi</td>
<td><a href="https://pharmaphorum.com/news/uk-places-order-for-60m-doses-of-sanofi-gsks-covid-19-vaccine/">60 million July 29, 2020</a></td>
<td><a href="https://www.sanofi.com/en/media-room/press-releases/2020/2020-09-18-12-52-46">300 million September 18, 2020</a></td>
<td>7 weeks</td>
<td>Ineffective </td>
</tr>
<tr>
<td>Johnson and Johnson/Janssen</td>
<td><a href="https://www.biopharma-reporter.com/Article/2020/08/17/UK-strikes-deals-with-J-J-Novavax-to-source-90m-COVID-19-vaccines">30 million 17th Aug 2020</a></td>
<td><a href="https://www.pharmaceutical-technology.com/news/eu-jj-covid-vaccine/">400 million 22nd Oct 2020</a></td>
<td>9 weeks</td>
<td>Phase 3 trials</td>
</tr>
<tr>
<td>Moderna</td>
<td><a href="https://www.gov.uk/government/news/government-secures-5-million-doses-of-moderna-vaccine">5 million 16th Nov 2020</a></td>
<td><a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2200">80 million 25th Nov 2020</a></td>
<td>9 days</td>
<td>Phase 3 trials</td>
</tr>
<tr>
<td>Novavax</td>
<td><a href="https://www.biopharma-reporter.com/Article/2020/08/17/UK-strikes-deals-with-J-J-Novavax-to-source-90m-COVID-19-vaccines">60 million 17th Aug 2020</a></td>
<td>No order</td>
<td>N/A</td>
<td>Phase 3 trials</td>
</tr>
<tr>
<td>Pfizer/BioNTech</td>
<td><a href="https://www.bloomberg.com/news/articles/2020-07-20/u-k-orders-90-million-vaccine-doses-from-pfizer-valneva">30 million July 20, 2020</a></td>
<td><a href="https://www.pfizer.com/news/press-release/press-release-detail/pfizer-and-biontech-reach-agreement-supply-eu-200-million">200 million November 11, 2020</a></td>
<td>4 months</td>
<td>Approved</td>
</tr>
<tr>
<td>Valneva</td>
<td><a href="https://www.bloomberg.com/news/articles/2020-07-20/u-k-orders-90-million-vaccine-doses-from-pfizer-valneva">60 million July 20, 2020</a></td>
<td>No order</td>
<td>N/A</td>
<td>Phase 1/2 trials</td>
</tr>
</tbody>
</table>
The EU is consistently far behind the UK in placing orders for the vaccines.
By the time the EU placed its first order for any type of vaccine, the UK had placed orders for six different types. This diversification reduced the risk that the failure of any single vaccine would severely affect the UK's ability to vaccinate its population.
Several EU countries tried to order the AstraZeneca vaccine in June, but [were prevented from doing so by the EU](https://www.itv.com/news/2021-01-26/covid-vaccine-what-is-the-dispute-between-the-eu-and-astrazeneca):
> The following month AstraZeneca reached a preliminary agreement with Germany, the Netherlands, France and Italy, a group known as the Inclusive Vaccine Alliance, based on the agreement with the UK. The announcement was June 13.
>
> But, the EU insisted that the Inclusive Vaccine Alliance could not formalise the deal.
>
> The European Commission insisted it should take over the contract negotiations on behalf of the whole EU.
The EU communiqué [announcing this](https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1597339415327&uri=CELEX:52020DC0245) would be delicious, if it weren't for the human suffering that has resulted from it:
> In order to scale this approach up to cover the whole EU, the Commission proposes to run a central procurement process, which creates a number of important advantages. In particular, all EU Member States will be able to benefit from an option to purchase vaccines via a single procurement action. This process also offers vaccine producers a significantly simplified negotiation process with a single point of contact, thus reducing costs for all. Centralising vaccine procurement at EU level has the merit of speed and efficiency by comparison with 27 separate processes.
There have also been mutterings that the EU was slow in ordering vaccines at least in part to provide time for the GSK/Sanofi vaccine to be approved. Unfortunately the Sanofi vaccine has not been shown to work, and some of the GSK/Sanofi facilities that were going to be used to make that vaccine are now being converted to make the Pfizer/BioNTech vaccine.
It's unlikely that we'll ever get definitive proof of the EU commission tipping the balance for Sanofi, but it is tragically comic that those plants are now going to be used to produce the vaccine that the EU turned down extra doses of earlier in 2020:
* "[The European Union was offered an extra 500 million doses of the vaccine now making it’s way across Europe, but turned the offer down](https://www.brusselstimes.com/news/belgium-all-news/147172/vaccines-eu-turned-down-offer-of-500-million-extra-doses/)"
* "[The European Union will take up its option to buy up to 100 million more doses of Pfizer and BioNTech’s COVID-19 vaccine after turning down an opportunity in July for a much bigger deal](
https://www.reuters.com/article/health-coronavirus-eu-pfizer/eu-to-order-more-pfizer-vaccine-after-declining-earlier-offer-idINKBN28R19D?edition-redirect=in)"
All of this has left people across the EU really quite angry, as they feel that the EU commission has failed to act competently.
## Timeline of events
This isn't the full sequence of relevant events, but it contains the key causes and reactions by the EU commission.
### 15th January
Pfizer announced that shipments of its vaccine [would be lower for a few weeks](https://www.france24.com/en/europe/20210115-covid-19-pfizer-temporarily-reduces-vaccine-deliveries-to-europe), so that production at the factory could be ramped up for increased shipments from late February.
### 23rd January
AstraZeneca announced delays due to problems at a plant run by a partner company (Novasep) in
Belgium, so they wouldn't be able to meet [their production targets](https://www.france24.com/en/europe/20210123-astrazeneca-says-covid-19-vaccine-deliveries-to-europe-will-be-lower-than-expected).
### 25th January
Commissioner Kyriakides [made comments](https://ec.europa.eu/commission/presscorner/detail/en/speech_21_211) in response to AstraZeneca's announcement of lower production than hoped:
> Last Friday, the company AstraZeneca surprisingly informed the Commission and the European Union Member States that it intends to supply considerably fewer doses in the coming weeks than agreed and announced.
>
> This new schedule is not acceptable to the European Union.
>
> The European Union wants to know exactly which doses have been produced by AstraZeneca and where exactly so far and if or to whom they have been delivered. ...The answers of the company have not been satisfactory so far. That's why a second meeting is scheduled for tonight.
### 27th January
Commissioner Stella Kyriakides makes a series of remarks that are filled with falsehoods. [Youtube version](https://youtu.be/5OPOQPS0F-8?t=85) or [text version](https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_21_267).
> The view that the company is not obliged to deliver because we signed a ‘best effort' agreement is neither correct nor is it acceptable.
> We signed an Advance Purchase Agreement for a product which at the time did not exist, and which still today is not yet authorised. And we signed it precisely to ensure that the company builds the manufacturing capacity to produce the vaccine early, so that they can deliver a certain volume of doses the day that it is authorised.
> "Not being able to ensure manufacturing capacity is against the letter and spirit of our agreement."
> And there's also no hierarchy of the four production plants named in the Advance Purchase Agreement. Two are located in the EU and two are located in UK.
I'll go over the contract details below, but these statements are false.
### 28th January
Belgian authorities [inspect the Novasep plant](https://www.oxfordmail.co.uk/news/national/19045879.vaccine-factory-inspected-belgium-amid-eu-dispute-astrazeneca/) where the AstraZeneca vaccine is being produced and which is experiencing production difficulties.
### 29th January
Eric Mamer, the European Commission Chief Spokesperson announced that the EU had published a redacted version of the contract with AstraZeneca. He then went on to make comments about [the production facilities](https://youtu.be/OfcEHEYerwM?t=676):
> I think you should take your time to go through the contract which is many many pages long to analyze it in detail, clearly I am not in a position to provide extensive comment on the specific clauses of the contract, we might come to that at a later stage, but we have always said here that indeed there are a number of plants that are mentioned in the contract that we have with AstraZeneca some of which are located in the United Kingdom and that it is foreseen that these plants will contribute to the effort of AstraZeneca to deliver doses to the European Union. There is absolutely no question for us that this is what the contract specifies.
And then also made comments about the expectations for [delivery schedules](https://youtu.be/OfcEHEYerwM?t=825):
> In our mind there is absolutely no doubt that we have a firm commitment with the company as with all other companies to deliver doses according to specific schedules, and that the clauses we have are there to take into account as is absolutely normal at the time the contract was signed these products did not yet exist.
Again, I'll cover the contract details below, but these statements are false.
Executive Vice President Valdis Dombrovskis says that [companies applying for export authorization must also reveal where the previous 3 months of shipments have gone:](https://youtu.be/KWsRwLRGX2c?t=862)
> I wanted to outline one provision from the regulation which foresees that companies applying for export authorization will also have to provide information on their exports and export destinations quantities and so on, for the period covering three months prior to entering into force of this regulation, so I think this will also help to shed a full light on export tendencies in recent weeks and months.
### 29th January early afternoon
EU invokes [Article 16 of the Northern Ireland Protocol](https://www.bbc.co.uk/news/uk-northern-ireland-55864442)
### 29th January about 11pm
EU commission [revokes invoking Article 16](https://www.independent.ie/news/covid-19-vaccine-chaos-as-eu-is-forced-into-u-turn-after-blocking-supplies-to-the-north-40028406.html).
> The source said the article may have been inadvertently triggered by “someone who did not understand the political implications” of the decision.
And then also pretends [it had never been invoked](https://davidallengreen.com/2021/01/what-is-article-16-of-the-northern-irish-protocol-and-what-on-earth-was-the-european-commission-thinking-includes-a-copy-of-the-now-deleted-proposed-regulation/).
## So, that was quite a series of events
While the events were unfolding, a lot of the actions and announcements by the EU commission seemed to make no sense to me.
When other people's behaviour doesn't make sense, I've come to learn that either they have different values to me, or they are making decisions based on a different set of 'facts' than I have.
The behaviour of the EU commission makes quite reasonable sense, if they believed that:
* producing huge quantities of a novel vaccine is a trivial endeavour.
* the contract with Astra Zeneca had different terms than it actually has.
* the AstraZeneca plants located in the EU were shipping vaccine doses to the UK.
I hope I don't need to explain why mass producing a novel vaccine is difficult, and there has been no evidence of any vaccine doses meant for EU production being shipped from the AstraZeneca plants to the UK. But, as promised, here is a quick analysis of the contract details.
## Contract details
The redacted version of the EU AstraZeneca [contract is here](https://ec.europa.eu/commission/presscorner/api/files/attachment/867990/APA%20-%20AstraZeneca.pdf).
Although reading contracts is not particularly exciting, it's a skill worth practicing as it's normally not that difficult.
### Best reasonable effort versus hard commitments
> Whereas, as part of that scale-up, AstraZeneca has committed to use its Best Reasonable Efforts to build capacity to manufacture 300 million doses of the Vaccine...
The phrase "Best Reasonable Efforts" is repeated multiple times through the contract. At no point does the contract say that AstraZeneca must hit scheduled dates, or mention any penalties that would be incurred if they did miss dates.
### Locations
Section 5.1 covers where AstraZeneca was planning to produce the vaccine for the EU:
> AstraZeneca shall use its Best Reasonable Efforts to manufacture the Initial Europe Doses within the EU for distribution, and to deliver to Distribution hubs, following EU marketing authorization, as set forth more fully in Section 7.1, approximately ...(number of doses redacted)
Section 5.4 does mention facilities in the UK, but only in terms of "if AstraZeneca can't meet its production goals from the planned EU or UK plants, then AstraZeneca will help partner with other companies in the EU to manufacture the vaccine". It definitely does not say that AstraZeneca must supply the EU from its UK plants.
## Questions that should be answered
Even assuming that the EU commission was making appropriate decisions but based on bad data, I have many questions:
### Is the EU commission aware of how difficult it is to scale up vaccine production?
This is the most disconcerting question for me.
The behaviour of the EU commission over the past few weeks is consistent with them believing that AstraZeneca chose not to honour the contract, rather than believing that scaling up production is an incredibly difficult thing to achieve.
The EU has been talking about building up partnerships with large technology firms, but they seem to just not comprehend that some things are difficult.
### Why was the AstraZeneca announcement that production was behind schedule a surprise to the EU commission?
I would have expected that the EU commission would be working closely with all of the vaccine manufacturers to help resolve any issues as fast as possible.
I would have expected this to include scientists having daily meetings with AstraZeneca, so that any deviation from the planned schedule could be managed without too much surprise.
This obviously was not happening, and the slippage in schedule came as a shock to the EU commission, who then did not believe AstraZeneca was telling the truth.
### What information led the EU commission to believe that vaccines being made in the EU were being shipped to the UK?
Multiple statements by European politicians, including those of Executive Vice President Valdis Dombrovskis, indicate to me that they honestly believed that vaccines produced in the EU by AstraZeneca had been shipped to the UK for weeks.
Invoking Article 16 could have been an appropriate thing to do if they believed that there were trucks full of vaccine on their way to the UK, but this belief appears not to be based in reality.
### Who is providing legal analysis of the contracts to the EU commission?
Multiple statements made by Commissioner Stella Kyriakides and Chief Spokesperson Eric Mamer regarding the contract were false.
The way that the EU released the redacted version of the contract to back up their false claims makes me believe that they _honestly believed those falsehoods_.
It's understandable for individual people to make slips of the tongue regarding contracts. It is not understandable that senior members of the EU commission can make serious allegations of breach of contract against a company, and for no lawyer in the EU to double-check that they aren't dramatically incorrect.
### Was the appropriate process for invoking Article 16 followed?
Having someone who is apparently ignorant of the implications of international treaties be able to invoke hostile clauses of those treaties is...disconcerting.
In particular, according to one report I read (sorry lost the link) invoking Article 16 was supposed to be preceded by notifying multiple people, and that notification did not happen. Again, yikes.
## Summary
This series of events seems to indicate a failure of communication and facts at the highest level of the EU, leaving them making decisions on false premises.
Even worse, it almost makes Boris Johnson look good in comparison.

# Tasty things to cook
/blog/30/Tasty_things_to_cook
I commented to a friend that I cook food at home most days, with 'recipes' that are quick and easy to prepare, cook and cleanup afterwards.
These are notes on them rather than full recipes.
<!-- end_preview -->
## Eggs as either Omelette or Frittata
Fry 3 fillings chosen from, say, bacon, onion, mushroom, tomato, broccoli. Crack three eggs into a bowl and add a tiny splash of milk. When the fillings are finished frying:
* Omelette: take the fillings out of the pan, add more butter, add the eggs, then put the fillings back on top of the eggs, distributed evenly across the surface. When the egg is firm, add cheese to one half of the surface and fold the omelette over.
* Frittata: add the eggs to the pan. Stir to mix the fillings evenly into the egg. Add cheese to the whole surface, then place under the grill/broiler to finish.
Adam Ragusea has a [decent video](https://www.youtube.com/watch?v=-q8czZSQbbQ&list=PL4bffQE5cvkTK1JVHY9VFbKhKTpfMX78U&index=45).
## Soup
You chop vegetables up. You put them in a pan with water or stock. You boil them. You eat the soup. I like garlic in most of my soups. Again, Adam Ragusea has a good video on [vegetable soup]( https://www.youtube.com/watch?v=21ofoREnXbM).
### Spicy veg
Add lots of ginger and cilantro/coriander stalks. Save coriander leaves to add just before finishing.
### Hearty vegetables
Add mostly root vegetables e.g. parsnip, carrots, sweet potato. Cook them to well done, and use a stick blender to (carefully) puree them inside the pan. Serve with a good dollop of cream.
### Chicken soup
Onion, potato, leeks, sweet potato. Add chicken legs on the bone, or just chicken thighs on the bone. When the vegetables are nearly cooked, take the chicken out, let it cool for a few minutes, then remove the meat from the bone using your hands and chop it into small pieces, before returning the meat to the pan. The vegetables can be blended before putting the meat back.
## Spaghetti with sauteed stuff
Start cooking some spaghetti. Start frying some stuff. When the spaghetti is done, add it to the fried stuff, and add some cheese. Combos of fried stuff that are nice:
* bacon, red onion or leek, mushrooms, feta cheese.
* garlic, fresh chopped tomatoes, Philadelphia cream cheese.
## Bolognese sauce
Chop up some onion. Grate a couple of carrots with a box grater. Start frying some minced beef (it needs fat, don't use lean mince) in a large pan. If you like, add some minced pork, or lamb. Break up all the meat as it cooks into tiny pieces.
Optionally, when it's browned, add quite a bit of white wine. Add the onion and carrots, and about the same amount of canned chopped tomato as the meat.
Simmer on low for at least 3 hours. Add some herbs and spices at some point; I typically add bay leaf, a tiny amount of chilli powder, [mace](https://www.schwartz.co.uk/discover/mace-ground), and lots of black pepper. Add Worcester sauce about halfway through, and some miso paste near the end.
## Cheesy bacony spaghetti sauce
Chop up and brown (but don't crisp) some smoked bacon. Add a lot of finely chopped onion, and medium chopped celery and soften. Add a lot of garlic, then add chopped tomato. Simmer for 30 minutes. Add a couple of teaspoons of light brown (or white) sugar, to balance the tomatoes.
Turn heat off (or at least very low) and add a lot of chopped cheddar cheese, either medium, or mix of mild and mature. Stir every minute, until the cheese melts and the sauce becomes rich and gooey.
## Sauteed chicken
Chop some deboned chicken thigh into one inch pieces. Optionally marinate with Worcester sauce and soy sauce for an hour.
Heat a pan to quite hot, to brown the chicken on both sides quite quickly. Optionally, add some spices e.g. Chinese five spice, sumac, ground white pepper. Add medium chopped onion to the pan and a few tablespoons of water, cover with a lid, and turn the pan down to medium. Cook for ten minutes, stirring occasionally. Adding the water steams the onions, which cooks them far faster than frying. Remove the pan lid and turn the heat back to high to drive off the water. Sometimes I add cherry tomatoes and fry them just until they get a little soft. Serve with rice.
## Tomato rice
Boil some rice with cumin, adding some sweetcorn right before it's done. While that's cooking, fry some roughly chopped onions, add quite a bit of garlic, then add canned tomatoes. Add some turmeric and black pepper. While that's cooking, grill/broil some chunky white fish or fish fingers. Chop up the fish. Add it and the rice to the onion and tomatoes. Mix it all up. Serve with parmesan cheese.
# General food notes
/blog/29/General_food_notes
I commented to a friend that I cook food at home most days, with 'recipes' that are quick and easy to prepare, cook and cleanup afterwards. These are some notes on things I consider when deciding what to eat.
<!-- end_preview -->
Every meal should contain some vegetable. Even if it's just a fresh tomato cut up.
Don't eat 'reduced fat' stuff. If you want to eat less fat, eat less stuff that contains fat.
Salt goes in liquid foods e.g. soups, sauces. Salt goes _on_ more solid food, e.g. scrambled eggs, omelette.
Sea salt is delicious. But you should buy a grinder to distribute it onto food.
Not every meal needs to contain meat. Not every meal needs to contain a carbohydrate. Sausages with a large salad is an awesome meal.
Salad needs a lot of salt. And a balsamic vinegar + oil dressing.
Dry cured bacon is the only type of bacon worth buying.
Topping food with a small amount of parmesan cheese is tastier, and healthier than topping with cheddar cheese.
Using spices to make your food tasty, is much healthier than dousing them with sauce.
If your food tastes as if it needs a little something, but you're not sure what, you either need to add more garlic or salt. Or maybe a splash of lemon juice.
Cooking and freezing soups, bolognese and curries are a great way to have tasty food for very little cost. And having food ready to pull from a freezer, is awesome for days when you don't feel like cooking.
Cooked rice and pasta can be frozen at home.
Butter is the best thing to fry most meat in.
Butter is the best thing to saute most vegetables in. Particularly carrots and broccoli.
Spices are great, but you have to experiment to find what you like. My vital list of spices are:
* sea salt
* black pepper
* whole cumin/jeera. I buy this in 500g bags, and throw a whole handful in with rice as it's boiling.
* Worcester sauce.
* chinese five spice.
* ginger powder.
* sumac.
* Lemon juice.
# PHP 7.4 support lifecycle
/blog/27/PHP_74_support_lifecycle
There is one major inaccuracy in the article ['PHP showing its maturity in release 7.4'](https://lwn.net/SubscriberLink/818973/507f4b5e09ab9870/). In particular, the part "PHP is likely to continue with releases in the 7.x branch, adding incremental improvements,".
That is not the case.
<!-- end_preview -->
There is no 7.5 planned, and currently the support plans for PHP 7.4 are:
* Active Support until 28 Nov 2021
* Security Support until 28 Nov 2022
From now, that is:
* 1 year, 6 months
* 2 years, 6 months
respectively.
This is a relatively short time compared to other software languages.
The thing that balances this out, imo, is that the PHP project also takes greater care than is typical to maintain backwards compatibility. Although there will be some deliberate breaks (and some accidental ones) in the PHP 8 release, the vast majority of projects will be able to upgrade with much less effort than, for example, upgrading from ANSI C to C99, and definitely less work than Python 2.x to 3.x.
Not only is the work less than in other languages, we also have some better tools for upgrades than other languages do.
[Rector](https://getrector.org/) is a good example. It is an automated refactoring tool that understands PHP code, and has plugin based rules for how code should be refactored. When a PHP version is released, some Rector rules are written that:
* understand which bits of the code are subject to a BC break.
* know how to refactor those bits into equivalent code that works on the new version or, when possible, into code that works on both versions.
* give an error for the parts they can't refactor.
More info about Rector, [text form](https://www.tomasvotruba.com/blog/2018/02/19/rector-part-1-what-and-how/), [video form](https://www.youtube.com/watch?v=S6fg7sJfh20), [homepage](https://getrector.org/).
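As a concrete illustration of the kind of rewrite such rules automate, here is a hand-written before/after sketch for one well-known PHP 8.0 break; this is my own example of the idea, not Rector's actual output:

```
<?php

// Before: create_function() was deprecated in PHP 7.2 and removed in
// PHP 8.0, so this line is a fatal error on PHP 8.
$doubler = create_function('$x', 'return $x * 2;');

// After: the equivalent closure works on both PHP 7.x and PHP 8.x.
$doubler = function ($x) {
    return $x * 2;
};

echo $doubler(21); // 42
```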
The PHP ecosystem has also seen a rapid improvement in the static code analyzers that are available to PHP users:
* [phpstan.org](https://phpstan.org/)
* [psalm.dev](https://psalm.dev/)
* [github.com/phan/phan](https://github.com/phan/phan)
* [github.com/Roave/BackwardCompatibilityCheck](https://github.com/Roave/BackwardCompatibilityCheck)
Sometimes healthy competition is better than projects working together. But I digress, back to the matter of the short time period of PHP 7.4 support.
In my opinion, the main reason for the short support lifetime is the limited number of people who are able and willing to work on maintaining PHP. This is not just a lack of volunteers; it's also a communications problem in scaling how many people can work on core PHP. The current communication methods aren't working that well, for various reasons.
There is time between now and the planned end of support for PHP 7.4 for an alternative plan.
If any group could be found or formed now, separate to the current core maintainers, and could do the work to come up with a plan to maintain a LTS version of PHP 7.4, then there is plenty of time over the next 18 months to discuss and implement that plan.
That would be strongly preferable to having a drama filled conversation close to the end of support deadline. In that scenario, I suspect people might use emotionally charged language to try to pressure the current core maintainers into maintaining a version of PHP that they don't want to support.
To be clear, I don't have the bandwidth to be part of maintaining a LTS version of PHP 7.4, but I do have enough energy to drive the conversation forward. So that people who are interested in doing the work can find each other, I've opened an issue ['PHP 7.4 LTS'](https://github.com/Danack/RfcCodex/issues/17) with the aim of people leaving their contact details, or for me to link to interested parties.
As the release of PHP 8 is fast approaching, I won't raise this on list just now, but will wait until the PHP 8 feature freeze has occurred.
# Cats survival strategy
/blog/26/Cats_survival_strategy
A lot of words have been written about the behaviour of cats and how living with humans for ten thousand years has affected that behaviour.
I have two 'pet' theories that seem kind of obvious to me, but I haven't seen elsewhere.
<!-- end_preview -->
## Behaviours selecting for like-ability
Humans today generally divide animals into two groups: animals we will eat if we think they're tasty, and those we think it's wrong to eat.
It seems reasonable to me that this strong differential between those two types of animals would have been smaller in the past.
Modern farming has provided humans with a food supply that is both more bountiful, and more reliable than the food supply over the past 10,000 years.
Before our food supply was reliable, there would regularly be periods of hunger and famine. During these periods humans would get very very hungry, and things that they wouldn't normally eat would suddenly start looking quite tasty.
People wouldn't do this for fun, but when the winter has been long, and their children are close to death due to malnutrition, the idea of converting a cat into some stew, and some baby sized fur clothes is going to be more appealing.
> He caught a mouse once and would not shut up about it. The whole family had to come and praise him.
People are, in general, not completely stupid.
Even if you're desperately hungry, you're not going to grab the nearest cat to kill. You're going to figure out which cats would be missed least and pick one of those.
A cat that has demonstrated how many mice it can kill is going to be viewed as a useful member of the community. A cat that just eats the mice it catches away from human view is not going to be viewed as useful, and so is far more likely to be selected for stew.
So yeah, we managed to train cats to bring us dead and dying vermin. Go humans!
## Behaviours selecting for fun.
One of the reasons that cats domesticated themselves is that for them, living in a human settlement is far safer than living wild.
But there are a couple of things that are still very dangerous for cats and have the potential to prevent the cats from passing on their genes.
### Aggressive dogs
Even today, some aggressive dogs will try to catch and kill cats. This is despite humans selecting for non-aggression in dogs since the time when we last used them for hunting.
### Vehicles
Ever since the invention of the wheel, cats have been killed in accidents where they have been struck by vehicles. This is selecting for cats that are scared of vehicles.
My dad has multiple cats, one of which is very smart and full of character. In the evening when my dad goes for a short walk to get some exercise, she will walk with him along the road. If a vehicle comes along the road, she will dart off to hide in a bush up a small embankment to completely avoid the risk of being hit by the car.
A similar thing is happening to the crows in rural France. They are becoming more wary of vehicles due to occasionally being struck by them. Near where my dad lives in Normandy, the roads are long and have little traffic. The crows will fly off when a vehicle gets to within 300 meters of them.
### Human against human violence
Aggressive dogs and vehicles are two big risks to cats individual survival that still occur, but another big risk to cats being able to pass on their genes has been humans killing other humans.
Due to the current lockdown, people are currently rediscovering that being stuck at home is kind of annoying. But being stuck in a home with the modern internet is not too bad.
It's hard to imagine how unpleasant it was for humans living through winter before we had developed water resistant clothing.
In Britain (where I live) the winters, while not harsh, can be quite long and depressing. Due to the position of Britain next to the north Atlantic, there are periods where the weather is quite bad, with rain and wind, for weeks at a time.
Until humans had invented and were materially well-off enough to have suitably warm and water resistant clothing, there would be periods of weeks (and sometimes even months) where going outside would cost a huge amount of calories, and so humans would be 'stuck' inside their homes.
Up until relatively recently, it was common for many families to live in one building.
Having people be stuck at home, for weeks at a time, with very low food supplies and not much to keep them busy is a recipe for people developing animosities to each other. And it's pretty easy to imagine those animosities to develop into acts of violence between people living in the same house.
Before modern medicine, it's possible that a single fight could end up killing enough adult humans in a tribe, to leave that tribe in a non-viable state, where all the surviving humans would decide to leave that place, to join another tribe.
Even if there were no human fatalities, having families fight against each other could end up with some of the people living there suddenly leaving, which would decrease the cats' food supply as fewer mice would be drawn to human activity.
For cats living in that home, either scenario would be a disaster.
For the whole colony of cats living there, it would be either the end of their line, or kill a large number of them before they could find residence in another human habitat.
Obviously there is no direct way for cats to stop violence between humans, however they can help prevent it happening in the first place by providing some entertainment to humans.
Kittens are adorable, and adult cats will play with each other, or with humans if they have a piece of string. Even today, cats playing like this provides a nice bit of stimulation and entertainment to humans, despite all the other sources of entertainment we have available.
It's very easy to imagine that for many families and tribes, the entertainment provided would be the difference between violence breaking out or not.
For most of human-cat co-existence, any genes in cats that result in providing entertainment for humans would have had strong natural-selection benefits over a long period of time, which is why cats today are so good at entertaining humans.
The corresponding natural selection for humans who like cats and find them entertaining are probably also worth considering.
# Why I help people with PHP RFCs
/blog/25/Why_I_help_people_with_PHP_RFCs
One thing I do to help the PHP project is help other people write RFCs. For those that don't know, RFCs are the way that the PHP language and core libraries are changed.
Some of the reasons I do this are:
## Enjoyable intellectual exercise
Sitting down and thinking through a problem deeply enough to be able to write convincing arguments about it is quite a rewarding exercise. And it is an _exercise_ for my brain.
Doing it helps me get better at analyzing problems and come up with sensible solutions for them, as well as getting better at being able to explain why a particular solution is sensible.
## Incremental progress is something I can do
On a project like PHP, there are two types of progress:
* big changes that dramatically improve the language.
* small changes that incrementally improve the language.
Most of the time, I don't have the energy to do the big changes. Also, I'm not that good at design, so probably shouldn't even if I could.
But the small changes are still useful things to do! Each small improvement makes life better for people who use PHP. For example, the [RFC: get_class() disallow null parameter](https://wiki.php.net/rfc/get_class_disallow_null_parameter) is literally a one character change in PHP.
<img src="/images/SmallestPossibleChange.png" width="100%"/>
That change isn't going to revolutionise the language, but it is going to save many individual programmers from having to debug why their code is 'acting crazy' when they accidentally pass null to that function.
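As a minimal sketch of the failure mode that change removes (my own example, not code from the RFC), consider a method that is accidentally handed null:

```
<?php

class OrderRepository
{
    public function describeOwner($service)
    {
        // Before PHP 7.2, if $service was unexpectedly null, get_class(null)
        // behaved like the zero-argument form and returned "OrderRepository"
        // (the calling class), silently hiding the real bug. Since the RFC,
        // explicitly passing null emits a warning and returns false, so the
        // mistake surfaces where it actually happens.
        return get_class($service);
    }
}
```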
## Helps new people get started into the PHP project
One of the problems that the PHP project has is a lack of contributors.
Quite often people new to contributing to the project have an idea for an RFC that they would like to work on.
If their first experience on the project is running an RFC, there are two things that are likely to happen:
* they don't have the experience needed to write a convincing RFC.
* they are not going to enjoy the experience of having their RFC criticised.
This leads to people who would otherwise contribute being driven away from the project.
By helping people new to the project run a successful RFC, not only does it help the project in the short term, it also makes it more likely that those new contributors will stick around.
## Lowers the barrier of getting stuff done
Even for people who are not new to PHP internals, the skill needed to write RFCs is outside the normal skill set that most developers have.
This is an even greater problem for people who are non-native English speakers.
Having improvements to the project be within our reach, but failing to achieve them due to a lack of skill sharing, would be one of the dumbest ways to fail.
## Makes internals discussions less painful
Although people writing emails to the internals email list are trying to help, a lot of the conversations are just not productive.
Drafting an RFC well enough that it makes a complete argument about a problem really cuts down on the number of messages sent. That saves time for internals contributors (as each email sent takes time to read) and also makes it easier to see the more important emails.
This is also why I maintain the [RFC Codex](https://github.com/Danack/RfcCodex/blob/master/rfc_codex.md), a list of ideas that people have discussed on internals but that haven't come to fruition. It helps people pick up previous discussions, without having to email internals "hey why hasn't this been done yet?".
## Benefits me personally
I like using PHP, but the limitations it has do annoy me from time-to-time.
The RFCs I've helped with actually make my programming experience a little bit better.
# Docker rebuild errors, a note to my future self
/blog/19/Docker_rebuild_errors_a_note_to_my_future_self
A couple of times I've seen a weird error when rebuilding Docker containers.
The error looks like there is a problem with the repository where the packages are stored, with at least one of the packages missing.
<!-- end_preview -->
For example:
```
Step 2/3 : RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends php7.2-xdebug
---> Running in 3ba18b5a0627
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
php-xdebug
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 1074 kB of archives.
After this operation, 6381 kB of additional disk space will be used.
Err:1 https://packages.sury.org/php stretch/main amd64 php-xdebug amd64 2.6.0+2.5.5-1+0~20180205132619.2+stretch~1.gbpc24c95
404 Not Found
E: Failed to fetch https://packages.sury.org/php/pool/main/x/xdebug/php-xdebug_2.6.0+2.5.5-1+0~20180205132619.2+stretch~1.gbpc24c95_amd64.deb 404 Not Found
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
```
This type of error is possible when you have two separate Docker containers where one container inherits from the other. I use this inheritance sparingly, but it is appropriate for defining a container for debugging that differs from the normal php_fpm container only by having Xdebug added.
The Dockerfile for the child blog_php_fpm_debug container that includes Xdebug looks like this:
```
FROM blog_php_fpm:latest
# TODO xdebug isn't currently stable with php 7.2
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends php7.2-xdebug
COPY xdebug.ini /etc/php/7.2/fpm/conf.d/20-xdebug.ini
```
The cause appears to be that Docker has rebuilt some of the containers, but not all of them, which has left them in a bad state. In particular:
* The base container blog_php_fpm was built a while ago, and has old information about where to download Xdebug from packages.sury.org.
* Docker thinks the base container is up-to-date.
* Docker thinks the child container blog_php_fpm_debug needs to be rebuilt.
Because Docker creates the child blog_php_fpm_debug container based on an out-of-date parent blog_php_fpm, it has out-of-date information about where to find the Xdebug extension.
The solution to this is to make Docker rebuild everything from scratch. You could probably accomplish this by blowing away all of the docker containers on a system, but you can do it more elegantly by doing:
```
docker-compose build --no-cache
```
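If you only want to refresh the images involved rather than everything on the host, `docker-compose build` also accepts a list of service names (assuming the compose services are named after the containers used in this post), for example:

```
docker-compose build --no-cache blog_php_fpm blog_php_fpm_debug
```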
By the way, if you have a CI system that supports running scheduled tasks ala cron, doing a `docker-compose build --no-cache` each day will help keep all of your containers up-to-date, which will both help stop this, and avoid long build times when updates do occur.

# Interface segregation, the forgotten 'i' in SOLID
/blog/18/Interface_segregation_the_forgotten_i_in_SOLID
Some programming patterns I follow are slightly unorthodox, at least in the sense that they are not patterns followed by the majority of PHP programmers. One of these unorthodoxies is that I do not believe controllers should be aware of HTTP Request objects.
Instead I use ['interface segregation'](https://en.wikipedia.org/wiki/Interface_segregation_principle) to make it so that the controllers can be decoupled from Request objects.
<!-- end_preview -->
The reasons to do this are:
* It makes your controllers easier to reason about.
* It makes it easier to write tests for your controllers.
* It decouples your controller code from the concept of HTTP request/response, which makes it easier to re-use that code.
The example below hopefully shows wtf I mean.
## Example controller
Imagine you have a SearchController that allows users to search some sort of data source. First I want you to look at the signature of the controller, without the actual body of the method:
{% set code_to_highlight %}
class SearchController {
    function search(DataSource $dataSource, Request $request) {
        ...
    }
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
Q: Can you tell how the 'Request' object is being used in the controller method?
A: Nope.
To be able to understand how the request object is being used by the controller you need to inspect all of the lines of code inside the controller. This isn't a massive burden for a single controller, but:
* it makes it harder to write tests. To be able to create a mock/stub request object that can be used to test the various aspects of the controller, you need to hold that info in your mind.
* It couples the controller directly with the request object. Imagine we wanted to change this controller code to be a background task.
Ok, so now let's look at what the controller is actually using the request object for.
{% set code_to_highlight %}
class SearchController
{
    function search(Request $request, DataSource $dataSource)
    {
        $queryParams = $request->getQueryParams();
        if (!array_key_exists('searchTerms', $queryParams)) {
            $message = "Parameter [searchTerms] is not available";
            throw new ParamMissingException($message);
        }

        $searchTerms = $queryParams['searchTerms'];
        $searchOptions = [];
        $searchOptions['keywords'] = explode(',', $searchTerms);

        return $dataSource->searchForItems($searchOptions);
    }
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
Ok, now you can see that the only thing the controller is using the server request for is to pull variables from the query params of the request.
This means that of the approximately 30 methods that are available on the Request object, only the `getQueryParams` method is being used.
The controller doesn't actually need access to all of the methods that are available on the request class. So let's extract a simple interface that provides only the required functionality.
### Extracting an interface
We'll extract an interface called VariableMap and also refactor the controller code to use it.
{% set code_to_highlight %}
interface VariableMap
{
    /**
     * @throws ParamMissingException
     */
    public function getVariable(string $variableName) : string;
}

class SearchController
{
    function search(VariableMap $variableMap, DataSource $dataSource)
    {
        $searchTerms = $variableMap->getVariable('searchTerms');

        $searchOptions = [];
        $searchOptions['keywords'] = explode(',', $searchTerms);

        return $dataSource->searchForItems($searchOptions);
    }
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
I hope you can agree that this is not a complex interface. In fact, for a lot of interfaces that are 'extracted' from already written code, a large proportion will be single method interfaces like this one.
Ok, we now need to create an implementation of this interface. The first one we'll create is the one that would be used in production to allow the `searchTerms` to be pulled out of the HTTP request.
{% set code_to_highlight %}
use Psr\Http\Message\ServerRequestInterface;

class PSR7VariableMap implements VariableMap
{
    /** @var ServerRequestInterface */
    private $serverRequest;

    public function __construct(ServerRequestInterface $serverRequest)
    {
        $this->serverRequest = $serverRequest;
    }

    public function getVariable(string $variableName) : string
    {
        $queryParams = $this->serverRequest->getQueryParams();
        if (array_key_exists($variableName, $queryParams) === false) {
            $message = "Parameter [$variableName] is not available";
            throw new ParamMissingException($message);
        }

        return $queryParams[$variableName];
    }
}

// If you are using Auryn or any other DIC system, you would
// need to alias the VariableMap to the specific implementation.
$injector->alias('VariableMap', 'PSR7VariableMap');
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
However, for testing we don't need to touch an actual request object.
Instead, let's make a class that implements the VariableMap interface but, rather than reading values from a complex object like Request, takes an array of key-values as its sole constructor parameter.
{% set code_to_highlight %}
class ArrayVariableMap implements VariableMap
{
    private $variables;

    public function __construct(array $variables)
    {
        $this->variables = $variables;
    }

    public function getVariable(string $variableName) : string
    {
        if (array_key_exists($variableName, $this->variables) === false) {
            $message = "Parameter [$variableName] is not available";
            throw new ParamMissingException($message);
        }

        return $this->variables[$variableName];
    }
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
The ArrayVariableMap implementation makes it much easier to write unit tests for this controller.
Instead of having to create a mock object for the Request, and having to both understand __and__ remember which of its methods are being used by the controller, we can just use the ArrayVariableMap implementation directly.
## Testing with Auryn
Here is the code needed to test this controller after extracting and segregating the interface:
{% set code_to_highlight %}
/**
 * Returns the keywords of the search terms as the results.
 * @package Article\InterfaceSegregation\Step2
 */
class EchoDataSource implements DataSource
{
    public function searchForItems(array $searchOptions)
    {
        return $searchOptions['keywords'];
    }
}

class SearchControllerTest extends \PHPUnit_Framework_TestCase
{
    function testSearchControllerWorks()
    {
        $varMap = new ArrayVariableMap(['searchTerms' => 'foo,bar']);
        $injector = createTestInjector(
            [VariableMap::class => $varMap],
            [DataSource::class => EchoDataSource::class]
        );

        $result = $injector->execute([SearchController::class, 'search']);
        $this->assertEquals(['foo', 'bar'], $result);
    }

    function testSearchControllerException()
    {
        $varMap = new ArrayVariableMap([]);
        $injector = createTestInjector(
            [VariableMap::class => $varMap],
            [DataSource::class => EchoDataSource::class]
        );

        $this->setExpectedException(ParamMissingException::class);
        $injector->execute([SearchController::class, 'search']);
    }
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
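For comparison, here is a test that skips the injector entirely and just calls the controller as a plain method. This isn't from the original example code, but it shows how little setup the segregated interface needs:
{% set code_to_highlight %}
class SearchControllerDirectCallTest extends \PHPUnit_Framework_TestCase
{
    function testSearchControllerDirectly()
    {
        // No mocks and no container: just the two small test implementations.
        $varMap = new ArrayVariableMap(['searchTerms' => 'foo,bar']);
        $controller = new SearchController();

        $result = $controller->search($varMap, new EchoDataSource());

        $this->assertEquals(['foo', 'bar'], $result);
    }
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}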
## Is doing this worth it?
One of the downsides of writing 'good' code is that it usually takes a bit more time to produce than just writing the first code that comes to mind.
So is refactoring the code worth it? In the sense that it provides enough benefit to be worth the extra code and thought needed?
#### Not tied to HTTP implementation
Because the controller is now decoupled from the Request object, the controller is no longer coupled to the specific HTTP request implementation (for example, a particular PSR-7 library) used by the application.
#### Not tied to HTTP at all
Actually, the controller is now completely decoupled from knowing about the HTTP layer at all. We could use it in a program that is run from the CLI, without having to serialize/deserialize a Request object.
#### Testing is easier
The tests have become easier to write: instead of having to understand which methods of the Request object are used by the controller and mock them, we can pass in a simple ArrayVariableMap containing exactly the values we want to test with.
#### Testing is quicker
Because the 'stub' implementations used for testing are smaller (in lines of code) than the Request implementations, the tests for the controller will be quicker to run. Although the difference for just a couple of tests will be trivial, when you have a large project and the number of unit tests starts to reach thousands, or tens of thousands, saving even just a few milliseconds per test means that the overall time spent waiting for your tests to run is significantly reduced.
## Summary
Although creating interfaces increases the number of components in your code base, the benefit is that your code is easier to reason about, and easier to test.
These two things are vital in any non-trivial project, as it is easy for code to grow to be more complex than you can easily understand. Or as [Edsger W. Dijkstra](https://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD340.html) put it:
> It has been suggested that there is some kind of law of nature telling us that
> the amount of intellectual effort needed grows with the square of program length...
> The purpose of abstracting is not to be vague, but to create a new semantic level
> in which one can be absolutely precise. The intellectual effort needed to ... understand
> a program need not grow more than proportional to program length.
Avoiding the trap of writing code that is too complex to understand should be a priority for programmers, and interface segregation is one tool that can help with that.
# PSR-7 and airing of grievances
/blog/17/PSR7_and_airing_of_grievances
I want to get something off my chest; I am not a big fan of [PSR-7](http://www.php-fig.org/psr/psr-7/), the PSR about 'HTTP message interfaces'.
Although I do use it, I do so in a way where my code is barely aware of it at all, and so I can swap to using a different representation very easily.
<!-- end_preview -->
Rather than just saying *ewww, I don't like it*, I ought to be clear about why it's something that I use reluctantly.
### Normal web requests
For normal HTTP requests that come in with a complete body attached to the request, PSR-7 works okay actually.
There are, in my opinion, a few rough spots in the API where either the functions are not fantastically named, or the return values (particularly of `ServerRequestInterface::getParsedBody()`) are not particularly clear.
Apart from those minor quibbles, though, PSR-7 solves the problem of how to represent an incoming HTTP request pretty well.
### Streaming/incomplete requests
The real problems with the PSR-7 start with how it tries to abstract away the difference between 'complete' HTTP requests, where the whole body is available with the request, and 'incomplete' HTTP requests where the body isn't available. 'Incomplete' requests are used in a couple of different places, for example when dealing with large file uploads, you don't want a 100MB file to be loaded into memory by PHP.
I've written before about how I am [really not a big fan](http://blog.basereality.com/blog/15/Stop_trying_to_force_interfaces) of forcing a common interface to two different concepts.
It would have been far better if the 'streaming' part of the interface had been removed, into a separate interface. That would avoid any confusion about whether a request is of the 'complete' type, or whether it represents an 'incomplete' request.
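For illustration, here is a rough sketch of the kind of split I mean. These two interfaces are hypothetical; they are not part of PSR-7 or of any framework:

```
interface CompleteRequest
{
    /**
     * The whole body has already been received, so it can be exposed
     * as a plain string.
     */
    public function getBodyAsString() : string;
}

interface StreamingRequest
{
    /**
     * The body may be too large to hold in memory (e.g. a big file upload),
     * so it is exposed as a readable stream resource instead.
     * @return resource
     */
    public function getBodyStream();
}
```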
It also would have made the implementation for the 'complete' requests be simpler. This apparently is the code you need to write if you want to create a body for a mock response:
```
$body = new Stream('php://temp', 'wb+');
$body->write(\"Hello world\");
$body->rewind();
```
which is a bit more complex than it ought to be:
```
new TextBody(\"Hello world\");
```
That's not the biggest deal in the world....it's just a little bit less good than it could be.
### Response implementation
The response implementation.....oh, the response implementation.
I certainly see that it makes it easier to write plugins for frameworks and for 'middlewares' if there is a 'Response' object passed around. However this is using a global mutable object to hold information. That is just not good programming practice. In this case it makes it hard to reason about which bits of an application are going to be modifying the response object....which makes it hard to write reliable code.
But the more fundamental problem is that having a response object is a bad abstraction as the response doesn't actually exist on the server. Certainly a response 'body' can exist before the response is sent, and additionally some headers that should be sent with the response can exist. But the actual response _does not exist until you start sending bytes to the client_.
The complete response only exists after the sending of the bytes has finished and the connection has been closed. This means that trying to represent the response as an object before it is sent is an inherently bad abstraction.
Bad abstractions can be useful (and I can see people will find the response object in PSR-7 useful) but they seem to be a poor trade-off between how easy it is to write code, and how easy it is to test that code, and also reason about what the code is doing in an application.
## Y U NO MAKE IMPLEMENTATION?!
By tradition, the PHP-FIG group likes to promote interfaces as the way of advancing interoperability in the PHP ecosystem. However for a large standard like PSR-7 where there are likely to be edge-cases in the implementation, it would have been a better choice for it to be released as a standard implementation.
The benefit that would bring is that all code interacting with the PSR-7 standard could rely on the standard behaviour in any edge-cases that may exist. Instead people will need to test their code against the particular implementation being used.
In the future if and when people decide that the edge-case behaviour needs to be changed, a new PSR-7.1 implementation could be released, and people could switch en masse to that new standard implementation, rather than the change having to be made in each separate implementation.
## So.....?
Realistically, the fact that PSR 7 isn't perfect isn't going to stop people from being able to write applications.
It just means that frameworks and other code that is written, particularly the wave of PHP 'middleware' that people are making, aren't going to be quite as good as they could be.
You can protect yourself from this by using interface segregation to separate your code from the PSR-7 implementation, so that if and when you realise you need a better representation of HTTP requests, it is easy to migrate to it, without needing to rewrite a significant portion of your application. But that will have to be a separate blog post.
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydVariadics and dependency injection
/blog/16/Variadics_and_dependency_injection
PHP has supported variadics since version 5.6. Recently the question of how to support them in dependency injection containers such as [Auryn](https://github.com/rdlowrey/auryn) was raised; should they be supported in a way similar to how Auryn handles scalar parameters?
Although supporting the ability to inject parameters by name is a useful thing to do (even if it is a bit of a hack), supporting injecting variadic parameters is a different kettle of fish entirely. One that would, in my opinion, be a bad choice.
<!-- end_preview -->
## Why Auryn supports injecting params by name at all
Auryn supports injecting parameters by name because PHP is missing a feature; the ability to have strongly typed scalars. If that feature was added, something similar to the code below would be possible:
{% set code_to_highlight %}
class DatabaseUsername extends string {}

function foo(string $username) {
    // ...
}

function connectToDB(DatabaseUsername $dbUsername) {
    foo($dbUsername);
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
However we don't have that feature in PHP. Because of that if we want to create strong scalar types like that, we have to do more typing on the keyboard:
{% set code_to_highlight %}
class DatabaseUsername {
    private $value;

    public function __construct($value) {
        $this->value = $value;
    }

    public function getValue() {
        return $this->value;
    }
}

function foo(string $username) {
    // ...
}

function connectToDB(DatabaseUsername $dbUsername) {
    foo($dbUsername->getValue());
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
This is not an immense burden, but it is one that is quite annoying to have to do continuously. It's particularly annoying when you're upgrading a legacy code base to use Auryn, and you have a large number of scalar values that need to be encapsulated in types.
To make it less of a burden, Auryn supports defining parameters by name:
{% set code_to_highlight %}
$injector->defineParam('dbUsername', getenv('dbUsername'));
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
Or by defining multiple parameters for a single class at once:
{% set code_to_highlight %}
$injector->define($classname, [...]);
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
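For example, for a hypothetical `DatabaseConnection` class (if I'm remembering Auryn's conventions correctly, the `:` prefix marks a raw value to inject as-is, rather than a class name to instantiate):
{% set code_to_highlight %}
// Define both scalar constructor parameters for DatabaseConnection at once.
$injector->define('DatabaseConnection', [
    ':username' => getenv('dbUsername'),
    ':password' => getenv('dbPassword'),
]);
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}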
## Variadics are a different problem
Here is some example code where one of the parameters is variadic:
{% set code_to_highlight %}
class Foo {
    public function __construct(Repository ...$repositories) {
        // ...
    }
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
In this code `$repositories` does not represent a single scalar variable and so Auryn's ability to make life a bit easier for the programmer by being able to inject scalar values by name does not apply.
Instead '...$repositories' represents a complex type. Resolving what needs to be injected for this parameter is a more difficult problem than just making parameters be injectable by name, and so will need a more advanced technique to solve.
Two possible solutions are to either use 'delegate methods' to set up the dependency injection, or to encapsulate the `...$repositories` inside a 'context'.
## Use a delegate method
The simplest way to be able to create an object that depends on a variadic parameter is to use a delegate function to create the variable dependency.
{% set code_to_highlight %}
function createFoo(RepositoryLocator $repoLocator)
{
    // Or whatever code is needed to find the repos.
    $repositories = $repoLocator->getRepos('Foo');

    // Unpack the array into Foo's variadic constructor parameter.
    return new Foo(...$repositories);
}

// Instruct the injector to call the function 'createFoo'
// whenever an object of type 'Foo' needs to be created.
$injector->delegate('Foo', 'createFoo');
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
This achieves the aim of being able to create an object that has a variadic dependency, however it has some downsides. One particular problem is it can only be used for constructor injection; it cannot be used to inject the dependencies as a parameter in normal functions or methods.
## Refactor the code to use a context
Using a 'context' doesn't have these downsides. Or, to give it its full name, the ['Encapsulated context pattern'](http://www.allankelly.net/static/patterns/encapsulatecontext.pdf).
The trade-off is that it requires you to refactor your code a little bit. This is a *good* trade-off to make, in this case. In fact it's a fantastic trade-off. It makes the code far easier to reason about.
{% set code_to_highlight %}
// This is the context that holds the 'repositories'
class FooRepositories {
    private $repositories;

    public function __construct(RepositoryLocator $repoLocator)
    {
        // Or whatever code is needed to find the repos.
        $this->repositories = $repoLocator->getRepos('Foo');
    }
}

class Foo {
    // Change the dependency to be on the context
    public function __construct(FooRepositories $fooRepositories) {
        // ...
    }
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
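Once the context exists, the injector can build `Foo` from its type hints alone (assuming `RepositoryLocator` itself is resolvable), with no special configuration for the variadic-style dependency:
{% set code_to_highlight %}
// FooRepositories is a plain, concrete type hint, so the injector
// can construct it, and then Foo, without any extra definitions.
$foo = $injector->make('Foo');
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}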
There are a couple of reasons why using a context is a superior solution:
* The dependency is now a named type, which means that you can see how it is used in your codebase without having to know which class it is used in.
* If some other code has a dependency on a separate set of repositories, you can create a separate context for that code. It is far easier to understand that `FooRepositories` is separate from `BarRepositories` compared to trying to reason about `...$repositories` used in one place, and `...$repositories` used in a separate place.
* If you're using a framework such as [Tier](https://github.com/danack/tier) that allows multiple levels of execution, then it's much easier to create specific context types and pass them around as dependencies, rather than creating a generically named variadic parameter and hoping for the best when passing that around.
## Summary
In my opinion Auryn, or any other dependency injection container, shouldn't handle variadics at all. They aren't a type and so can't be reasoned about by a DIC.
People should use either delegation or contexts to inject variable dependencies such as variadics, as those are 'easy to reason about' ways of achieving that goal.
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydStop trying to force interfaces
/blog/15/Stop_trying_to_force_interfaces
A common anti-pattern (aka mistake) people make when trying to write re-usable code is to try to force either an interface or an abstract class where it isn't appropriate. The question below is paraphrased from a [Stackoverflow question](http://stackoverflow.com/q/33605630/778719)
<!-- end_preview -->
### The question
<i>
I have some 'market place' classes that interact with the APIs of marketplaces like Ebay, Amazon etc.
I want to make these classes define some mandatory functions such as createProduct, updateProduct, getCategories, getOrders. Each of these marketplaces requires a different format of data, and so the functions require different types and numbers of parameters. For example:
</i>
```
// Interface for Ebay
interface Marketplace {
    public function createProduct($product, $multi);
}

// Interface for Amazon
interface Marketplace {
    public function createProduct(array $products, $multi, $variations);
}
```
<i>
In such case I cannot implement a single 'Marketplace' interface as each of the implementations is different. How should I create an interface for these classes?
</i>
### Analysis of the problem
The original poster's code is trying to put both of these things into a single interface:
```
interface Marketplace {
    public function createProduct($product, $multi);
    // or
    public function createProduct(array $products, $multi, $variations);
}
```
The fundamental problem is that these are just inherently incompatible interfaces.
Each 'market place' has different capabilities (e.g. Amazon allows 'multiple' purchase) and so the methods for creating products need to have different parameters.
Although you could theoretically come up with an abstracted interface that hides all of those details, it would almost certainly be horrible, as the abstraction would be incorrect, and misleadingly so, for at least one of those objects.
What we need to recognise is that this is a case where the application is building up information during its operation. In particular, the functions that are required to do the uploading cannot be determined until some other code is run. The OP is trying to solve this problem by making the two different sets of functions have the same interface, so that it is irrelevant which is chosen to be run during the operation of the application, as either set of functionality can be swapped in for the other.
The much better solution is just to acknowledge that this is an application that needs to build up information about what needs to be executed _INTERNALLY_ during the program's execution, and to work with a 'framework' that has the capability to handle this build-up of information, such as [Tier](https://github.com/danack/tier).
### How to refactor the code - 1st stage
We should acknowledge that the interface isn't common, and that the work should be done by two separate functions. Let's imagine that the use-case is that someone has a webpage where they can upload some images and text, with prices, and then select to upload that to either Amazon or Ebay, or both. So the first part of the program is to figure out which uploaders need calling, and the uploading part is separate:
```
// Upload a list of products to Amazon
function uploadAmazonProducts(AmazonClient $ac, ProductList $productList) {
    ...
}

// Upload a list of products to Ebay
function uploadEbayProducts(EbayClient $ec, ProductList $productList) {
    ...
}

// Figure out which uploaders need to be run.
function determineUploaders(UserInput $userInput) {
    $uploaderList = [];

    if ($userInput->isAmazonSelected() == true) {
        $uploaderList[] = 'uploadAmazonProducts';
    }
    if ($userInput->isEbaySelected() == true) {
        $uploaderList[] = 'uploadEbayProducts';
    }

    return $uploaderList;
}

// The product list would need to come from the user's input.
$injector->delegate(ProductList::class, 'createProductListFromUserInput');

// Get the list of uploaders to run.
$uploaderList = $injector->execute('determineUploaders');

// Run each of them.
foreach ($uploaderList as $uploader) {
    // We execute each of the uploaders as appropriate. The first stage of
    // 'determineUploaders' has no knowledge of what the 'uploader' callables
    // require, only their name. The injector does all the work to provide
    // the dependencies for each uploader.
    $injector->execute($uploader);
}
```
### How to refactor the code - 2nd stage
So separating the functions that upload to Amazon/Ebay is nice...but I would strongly suspect that the 'ProductList' is also a bad abstraction. There are probably features for product lists that are possible to do on Amazon that are not possible to do on Ebay, and vice-versa. So again, using a common abstraction between the two leads to at least one of the abstractions being either misleading or flat out wrong.
So let us separate the two of them as well, using the delegate functionality. These factory functions are not abstract at all, they create ProductLists specific to each retailer.
```
function createAmazonProductListFromUserInput(UserInput $ui) : AmazonProductList {
    return AmazonProductList::fromUserInput($ui);
}

function createEbayProductListFromUserInput(UserInput $ui) : EbayProductList {
    return EbayProductList::fromUserInput($ui);
}

// Now set the uploading functions to have dependencies on their specific ProductList

// The Amazon uploader depends on an AmazonProductList
function uploadAmazonProducts(
    AmazonClient $ac,
    AmazonProductList $productList
) {...}

// The Ebay uploader depends on an EbayProductList
function uploadEbayProducts(
    EbayClient $ec,
    EbayProductList $productList
) {...}

// Tell the injector how to create each of those specific product lists.
$injector->delegate('AmazonProductList', 'createAmazonProductListFromUserInput');
$injector->delegate('EbayProductList', 'createEbayProductListFromUserInput');

// This code is the same as before.
function determineUploaders(UserInput $userInput) {
    $uploaderList = [];

    if ($userInput->isAmazonSelected() == true) {
        $uploaderList[] = 'uploadAmazonProducts';
    }
    if ($userInput->isEbaySelected() == true) {
        $uploaderList[] = 'uploadEbayProducts';
    }

    return $uploaderList;
}

// The first part of the program execution determines what needs to be run.
$uploaderList = $injector->execute('determineUploaders');

// The second part of the program executes each of them.
foreach ($uploaderList as $uploader) {
    $injector->execute($uploader);
}
```
Yay! We have perfectly understandable code, without any need for abstractions!
Don't get me wrong - abstractions are lovely when they are correct. But when they are inherently the wrong solution to a problem, you shouldn't force yourself to use them.
Note - the solution above is completely analyzable by static code analysis tools, which is great, as static analysis can help prevent a whole class of errors in programs. For example, if any call to a function is missing a parameter, the static analyzer would detect that.
This solution requires that you use a framework that allows you to run multiple pieces of code through the DIC. That might be a bit difficult with traditional frameworks like Symfony or Zend. With the [Tier](https://github.com/danack/tier) framework it is trivial to run the separate bits of code, with the DIC able to inject the dependencies for each of them.
### Bad solutions to this problem
I find myself forced to comment on the accepted answer for that question.
<i>
A usual way to resolve this kind of problem is to define an optionsResolver which is passed to each one of your class in order to be initialized...To see a full example of implementation, <a href="http://symfony.com/doc/current/components/options_resolver.html" target="_blank">take a look at Symfony2.</a>
</i>
This is known as [Yo' Dawgging](http://www.urbandictionary.com/define.php?term=Yo+Dawg) as in "<i>Yo dawg, I heard yo like coding so we put a language in yo language so yo can code while yo code.</i>"
Instead of solving this with just code, that solution is solving it with 'code that runs code' i.e. creating a meta-level programming solution. This means that simple static code analysis tools cannot be run on the code, as it's not possible to detect whether all the required options for a 'marketplace' object will be set.
Worse than that, it just makes the code really hard to think about. Without going to look at the internal lines of code that make up the resolver function:
```
protected function setResolver(OptionsResolver $resolver)
{
    $resolver
        ->setRequired(array('product', 'multi'))
        ->setAllowedTypes('product', 'string')
        ->setAllowedTypes('multi', 'boolean')
        ->setDefaults(array('multi' => true));
}
```
it is impossible to understand what parameters need to be set. Making code hard to reason about like this is a very bad trade-off.
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydArguing on the internet is a waste of time
/blog/14/Arguing_on_the_internet_is_a_waste_of_time
Some people I have interacted with online have left me feeling...unhappy. It's not that they were trolling (particularly hard) or that they were seeking to antagonise me. Some of them are even trying to help me! by suggesting ways I can do things better! I wasn't necessarily asking for that advice, but they weren't deliberately trying to make me feel bad.
<!-- end_preview -->
What I've noticed from those unhappy conversations is a pattern. For most discussions on certain topics, and almost all discussions I have with some people, every conversation turns out to be a combination of the following:
* never-ending, unless I completely retreat from my opinions.
* time-consuming, as I try to justify why I disagree, and other people refuse to accept that I might have valid reasons for thinking differently from them.
* frustrating, as the 'discussion' degenerates into barely disguised name-calling.
I think this problem stems from two causes:
#### People are getting really good at arguing on the internet
People who take part in online discussions have had a long time to practice. Even for people who discovered the internet later than most people, there has been a decade for them to practice their arguing skills.
And unfortunately, the main skill people (including myself) have developed is not how to make cogent and clear points, but instead how to avoid conceding anything in a discussion. This is done either by bringing up more and more unrelated points, or by claiming things as facts when actually they're just opinions, or other various tactics.
#### People don't realise it's okay to disagree
Just in general, it's okay for people to disagree about stuff.
But it's particularly true when people are making decisions based on their own set of priorities. When two people value some concerns differently to each other, those two people can both be making completely logical arguments, and yet still come to different conclusions. In other words people can disagree purely because the logical arguments are based off different values, not because either person's argument is flawed.
A good example of this is the (inane) discussion that surrounded whether the version of PHP to be released after PHP 5.6 should be named either [PHP 6 or PHP 7](https://wiki.php.net/rfc/php6).
The reason that there was any discussion at all is that PHP 6 had been in development for a long time, but had been cancelled due to technical hurdles. The very short version of the two sides of the discussion are:
* People whose priority was having consistent semver versioning, and so thought the version name should be 1 more than the last major version number, i.e. be called PHP 6.
* People whose priority was avoiding any confusion between the new, good version of PHP and the old, cancelled version, and who therefore wanted to call it PHP 7 to make it clear that it is a different release. Additionally, people who had worked on PHP 6 valued not being reminded of the failure of the PHP 6 project.
Both of these sets of priorities are fine.
You can't say that people are 'wrong' to value semver versioning less than they value clearly separating the new version of PHP from the failed old version.
You might try to persuade them to change their values if you think that they haven't fully considered the implication of not following semver. And you can certainly say that you don't agree with how people have prioritised those two sides.
But at a fundamental level you can't tell people that they're wrong just because people have different priorities than you do[^playfully_antagonistic].
_Having different priorities is not a problem._
It's fine for people to value things differently. In programming, as in life, how people value things comes from the experiences they have had, and from the challenges they have faced. As everyone leads different lives and has had different experiences it is natural for people to value things differently.
What is a problem is that a significant number of people just don't accept that other people should be allowed to have different priorities, and so they seek to argue them into having the 'correct' set of priorities.
#### And my point is?
I want to be more productive and ship more code.
I'm going to achieve this, at least in part, by actively bowing out of conversations that I think are unlikely to be productive.
If I do this with you, it's not because I think you're a horrible person[^horrible_person] and never want to talk to you again, it's just that I want to get on and be productive.
This isn't going to be a judgement on whether you're right or wrong in a particular discussion, or whether we are just disagreeing because we value things differently.
It's solely because I need to get on and get shit done.
[^playfully_antagonistic]: Without being a massive twat.
[^horrible_person]: I'm not saying you're not a horrible person either. People's character seems to be a mostly orthogonal fact to whether having discussions with them is a productive use of either person's time.
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydTime is not fungible
/blog/13/Time_is_not_fungible
The below is a repost of a Slashdot [comment by JustinOpinion](http://ask.slashdot.org/comments.pl?sid=1932550&cid=34743614) that I thought was too great to not repost. It's in response to the question, "How do you prove software testing saves money".
<!-- end_preview -->
<hr/>
There is a generally-held belief among coders that "doing it right, the first time" and "rewriting this mess" will save money in the long-term, and that managers are idiots for not seeing that. This can, of course, be true. But it isn't always true, and coders are sometimes projecting their OCD-desire to have nice code (and sometimes suffering from "I didn't write it so it must be crap" syndrome) and assuming that this will translate into the dollars and cents that the company cares about.
Sometimes it's worth it; sometimes it's really not.
The thing about money is that it is both non-linear (double the money doesn't necessarily have double the value; sometimes it has more than double because you can overcome barriers; sometimes it has less because of diminishing returns, etc.) and temporally varying (inflation, time-value-of-money, etc.). Because of this, it can actually make economic sense to do something in a half-baked way, and "pay the price" later on (in terms of higher support costs, or even having to totally re-do a task/project). For example, in cases where you "absolutely need it now" (the value of having it finished soon becomes larger than down-the-road problems) or because you can't spare the cash right now (the value of using that money to do something else right now is larger than the down-the-road problems). (If you want a physics analogy, notice that money is not a conservative force-field: it is a path-dependent process...)
I'm not saying that it always makes sense to do slipshod work now and suffer the consequences later. There are plenty of dumb managers who over-value short-term gains compared to long-term. But that doesn't mean that the optimal solution is to spend massive effort up-front; there is such a thing as being too much of a perfectionist. And, importantly, the right answer will vary wildly depending on circumstances and the current state of the business. A startup may need a product to show (anything!) in order to secure more money. Doing it "right" will mean bankruptcy, which is far worse than having to keep fixing and maintaining a piece of shoddy code for years to come. A very well-established company, on the other hand, may do serious damage to their reputation if they release something buggy; and can probably afford to delay a release.
Actually figuring out the cost/benefit is not simple. In principle this is what good managers and good accountants are there to do: to figure out how best to allocate the finite resources. If you think you've found a way to reduce costs by implementing testing, then by all means show them the data that supports your case. However don't assume that just because testing will make the product better, that it actually makes sense from a business perspective.
<hr/>
This is one of the reasons why code written for businesses is horrible. It's not that the programmers are totally incompetent; it's just that a lot of the time there is no business case for making the code as good as the programmers would like.
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydApple don't need fan boys
/blog/12/Apple_dont_need_fan_boys
As much as I like arguing on the Internet, one thing that irritates me is when people repeatedly claim facts with no basis in reality about why Apple has 'special' advantages selling its products that other companies don't have.
<!-- end_preview -->
These false claims are:
## Apple have devoted fans who buy Apple products
The first claim is that Apple have (and have always had) a vast pool of fanboys; people who like Apple products mostly because the fanboys think it is fashionable to like Apple products. It is only because of this pool of fanboys that the iPod and then iPhone became so popular.
This is pretty clearly false. The vast majority of people who have bought iPhones either:
* use Windows PCs
or
* don't own either a Mac or a Windows PC - i.e. they're teenagers for whom the iPhone is the first internet-capable device they've owned.
This can be pretty clearly demonstrated by the number of iPhones sold when compared to the number of Mac PCs that people own. The install base of Mac computers was [22 million](http://appleinsider.com/articles/07/03/02/mac_install_base_estimated_at_22_million_pre_leopard) just before the iPhone launched. Seeing as over 250 million iPhones have been sold since then, the vast majority of those buyers cannot have been existing Mac owners.
## iPhone was the first smart phone to the market and so has first mover advantage.
That's not true - Nokia and SonyEricsson had many smart phones which were functionally equivalent to an iPhone (i.e. name something the iPhone 1 could do that the Nokia smart phones released at that time couldn't do).
In the period when these phones were released from early 2000s to when the iPhone went on sale in 2007, they sold in relatively large but not huge numbers.
Even though these phones were out for many years before the iPhone, they didn't sell in huge numbers because they were absolutely _shite_. As soon as someone who had a Nokia/SE smart phone tried an iPhone, they didn't want to go back.
## All their products look good and that's why people buy them.
That _is_ a legitimate observation about Apple but it's not just looks; it's the whole design of the user experience.
Other companies are free to spend the money on design required to make their products as shiny as Apple products look. What amazes me is that other companies don't seem to be able to match this user experience of the product even though they've got the iPhone buying and using experience in front of them to copy.
## Apple spends more on marketing than other companies
Nope. Although Apple does have highly noticeable adverts which are also pretty memorable* (which helps with the perception that they do more marketing and advertising than other companies), other companies are the ones that are spending huge amounts of cash to promote their lack-lustre products.
Yes, I'm particularly referring to both Microsoft which [has just spent $900 million for their Windows 8 Surface tablets](http://www.forbes.com/sites/ycharts/2012/08/02/who-spends-more-on-ads-apple-or-microsoft-another-lesson-in-quality-vs-quantity/) which sold a whopping 2 million units in the first year, and Samsung who spend vastly more [on advertising than Apple](http://www.asymco.com/2012/11/29/the-cost-of-selling-galaxies/).
## You're allowed your own opinions, not your own facts
As I said, I do appreciate having a good ---argument--- discussion with people on the internet about why Apple is so successful. However one requirement for it to be a 'discussion' is that the set of facts has to be roughly agreed upon.
If the set of basic facts can't be agreed upon, then there's no point trying to have a discussion. Either you or the person you're trying to have the discussion with (or both of you) are perceiving reality incorrectly, and there's no way that you'll be able to persuade the other with a rational discussion.
* The current set of Apple ads are noticeable for two things:
i) The audio is really, really quiet, leading to a very calming effect, which is _highly_ unusual in adverts, and in clear contrast to [Microsoft's](http://www.youtube.com/watch?v=tGvHNNOLnCk) [adverts](http://www.youtube.com/watch?v=U7UlE-o8DQQ) which always seek to be as energetic as physiologically possible.
ii) They show people using the product, with almost no other explanation about the product, which ties back into Apple products being so functionally obvious and easy to use, that people can understand what they do just from simple visual demos.
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydComplete Nginx config for PHP
/blog/11/Complete_Nginx_config_for_PHP
Nginx is awesome. PHP-FPM is awesome. Nginx with PHP-FPM is really awesome. However the documentation for how to link the two of them up through the config files is not particularly awesome.
<!-- end_preview -->
Below is the set of config files used to configure Nginx and a website. The only things that are noticeable about the config are:
* It's for a tiny website that runs on an Amazon micro instance. For a real website, you would need many more workers.
* the part in the Nginx config for the site that does the `try_files`.
```
set $originalURI $uri;
try_files $uri /routing.php /50x_static.html;
fastcgi_param QUERY_STRING q=$originalURI&$query_string;
```
Almost every other example of how to configure PHP-FPM with Nginx has the Front Controller php file as the last parameter for the try_files line. That is fine, except that hitting the last parameter of try_files causes Nginx to rewrite the request to that last parameter and reprocess the request through all of the Nginx config.
Although that works, it seems a bit silly.
By just saving the original URI and passing that into the QUERY_STRING for PHP-FPM, the additional processing loop is skipped.
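On the PHP side, the front controller then reads the original URI back out of the query string. A minimal sketch, assuming the front controller is the `routing.php` referenced above and the parameter name matches the `q=` in the `fastcgi_param` line:
```
// routing.php
// The original request URI, as saved by Nginx into the query string.
$originalUri = isset($_GET['q']) ? $_GET['q'] : '/';

// ...dispatch $originalUri to the application's router as normal.
```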
### Base Nginx config file that is used by all sites.
{{syntaxHighlighterFile('example_nginx.conf', 'js')}}
### Base PHP-FPM config file that is used by all sites.
{syntaxHighlighterFile lang='js' file='example_php-fpm.conf'}
{/syntaxHighlighterFile}
### Site Nginx config that routes requests to either static files, or the front controller.
{syntaxHighlighterFile lang='js' file='example_site.nginx.conf'}
{/syntaxHighlighterFile}
### PHP-FPM config for a site, which creates pools and workers.
{syntaxHighlighterFile lang='js' file='example_site.php-fpm.conf'}
{/syntaxHighlighterFile}
### FastCGI config, to avoid repetition in the above file.
{syntaxHighlighter lang='js'}
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#QUERY_STRING is not set in here - set it in Nginx to prevent extra redirect
#fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
{/syntaxHighlighter}
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydIncluding functional library code in PHP
/blog/8/Including_functional_library_code_in_PHP
PHP has a pretty good system for including class based code that uses the class namespace and name to define the possible file names that should be searched to auto(matically)-load the class. However PHP has a really crappy system for including functional code, the [require/include](http://php.net/manual/en/function.require-once.php) system.
<!-- end_preview -->
It's crappy because:
- It always causes a file hit. Neither APC nor OPCache can intercept its calls, and so every require causes a hit to the file system.
- It requires hard coding of paths to find the exact file.
- The syntax just sucks*.
There is an alternative solution to including functional code in PHP (which also sucks just not quite as much).
1. Define a class in a namespace for the file you want to be able to include.
2. Give that class a static function that does nothing.
3. Put the functional code into a 2nd namespace block, which is set to the global namespace.
4. Call the class's static function to load the functional code.
An example is a library file that adds extra multi-byte string functions which haven't been implemented in core PHP:
{% set code_to_highlight %}
namespace Intahwebz\MBExtra {
    class Functions {
        public static function load() {
        }
    }
}

namespace {
    function mb_ucwords($string) {
        return mb_convert_case($string, MB_CASE_TITLE);
    }

    function mb_lcfirst($str) {
        return mb_strtolower(mb_substr($str, 0, 1)).mb_substr($str, 1);
    }

    function mb_strcasecmp($str1, $str2, $encoding = null) {
        if (null === $encoding) {
            $encoding = mb_internal_encoding();
        }
        return strcmp(mb_strtoupper($str1, $encoding), mb_strtoupper($str2, $encoding));
    }

    // And all the other functions are available at https://github.com/Danack/mb_extra
}
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
It is now possible to pull the functions defined in the library into any other file by calling:
```
\Intahwebz\MBExtra\Functions::load();
```
Which will load the PHP file that defines that class into the current process, and all the functions defined in the global namespace will be available. Don't get me wrong, *this still sucks*, but it does have the advantages that:
- The syntax for including functional code is now the same as including class based code, so there's one less thing to think about.
- Composer (or any other autoloader you might use) will fixup all the dependencies for the file paths.
- Because you're just calling a class function, both APC and OPCache are able to cache the file, avoiding both the file system access and compiling the code again.
Until someone suggests a better way of being able to manage functional code dependencies between projects, I'll be using the above hack to manage those dependencies.
## * Example of the syntax sucking
I have a library of extra multi-byte versions of string functions which gets installed by Composer into:
$PROJECT_DIR/vendor/intahwebz/mb_extra/src/Intahwebz/MBExtra/Functions.php
which I then want to include in another library file which Composer installs in:
$PROJECT_DIR/vendor/danack/PHPTemplate/src/Intahwebz/PHPTemplate/Converter/TemplateParser.php
The require line I would need to use the mbextra library file in the PHPTemplate library would be:
```require_once __DIR__."/../../../../../../intahwebz/mb_extra/src/Intahwebz/MBExtra/Functions.php"```
which just *suuuuuuucks*.
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydGo home PHP; you're drunk
/blog/7/Go_home_PHP_youre_drunk
So this works.
<!-- end_preview -->
{% set code_to_highlight %}
const ✓ = true;
const ✕ = false;

function ≠($left, $right) {
    return $left != $right;
}

function ≅($left, $right) {
    return ($left > $right - 0.0001) && ($left < $right + 0.0001);
}

function ≡($left, $right) {
    return $left === $right;
}

function ≢($left, $right) {
    return $left !== $right;
}

$a = 1;
$b = 2 - 1;

echo ≡($a, $b)."\n";
echo ≅($a, $a + 0.000001)."\n";
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
And the code below:
{% set code_to_highlight %}
$fileHandle = fopen("lolwut.php", 'w');
echo "\xE2\x80\x8B";
fwrite($fileHandle, "<?php\n");
fwrite($fileHandle, "\n");
fwrite($fileHandle, " $\xE2\x80\x8B = 'magix';\n");
fwrite($fileHandle, " echo $\xE2\x80\x8B;\n");
fwrite($fileHandle, "\n");
fwrite($fileHandle, " function fo\xE2\x80\x8Bo(){\n");
fwrite($fileHandle, " echo 'bar';\n");
fwrite($fileHandle, " }\n");
fwrite($fileHandle, "\n");
fwrite($fileHandle, " foo();\n");
fwrite($fileHandle, "?>");
fclose($fileHandle);
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
Generates some code that looks like this:
{% set code_to_highlight %}
<?php
$ = 'magix';
echo $;
function foo() {
echo 'bar';
}
foo();
?>
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
Which outputs the following when run
<pre>magix
Fatal error: Call to undefined function foo() </pre>
*Obviously*.
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydCreating images with transparency in PHP
/blog/6/Creating_images_with_transparency_in_PHP
Because I keep forgetting and having to remind myself every couple of years, here is how to properly set up an image for anti-aliasing in PHP using the GD library. You need to create an image and give it a proper background colour, even if that background colour is transparent. This makes the image library use a proper alpha-channel (which is the only sensible way of doing alpha blending) rather than using an index-based alpha, where only one 'colour' is transparent and all the others are fully opaque. The code below produces an image like this:
<!-- end_preview -->
{{ articleImage('phpGD_imageTest.png', 'original', 'none') }}
The image has a background colour on the left-hand side of the picture, and the font is anti-aliased against that correctly. The font is also anti-aliased against the transparent background of the right-hand side of the picture, and so blends with the background of the page.
{% set code_to_highlight %}
$font = '../../fonts/Arial.ttf';
$text = 'The Quick Brown Fox Jumps over the Lazy Dog';

// Create the image
function imageCreateTransparent($x, $y) {
    $imageOut = imagecreatetruecolor($x, $y);
    $backgroundColor = imagecolorallocatealpha($imageOut, 0, 0, 0, 127);
    imagefill($imageOut, 0, 0, $backgroundColor);

    return $imageOut;
}

$image = imageCreateTransparent(600, 100);

// Create some colors
$white = imagecolorallocate($image, 255, 255, 255);
$fontColour = imagecolorallocate($image, 0xff, 0x2f, 0x2f);

// Draw the white box
imagefilledrectangle($image, 0, 0, 399, 29, $white);

// Add the text over the top
imagettftext($image, 20, 0, 10, 20, $fontColour, $font, $text);

imagesavealpha($image, true);

header("Content-Type: image/png");
imagepng($image);
{% endset %}
{{ syntaxHighlighter(code_to_highlight, 'php') }}
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydAll the PHP frameworks
/blog/5/All_the_PHP_frameworks
There are too many frameworks. Here is a quick reference list:
<!-- end_preview -->
Cake
====
* CakePHP makes building web applications simpler, faster and require less code.
* [http://cakephp.org/](http://cakephp.org/)
CodeIgniter
===========
* CodeIgniter is a proven, agile & open PHP web application framework with a small footprint. It is powering the next generation of web apps.
* [http://ellislab.com/codeigniter](http://ellislab.com/codeigniter)
Flow/Typo3
=====
* TYPO3 Flow is a web application platform enabling developers creating excellent web solutions and bring back the joy of coding.
* Being re-written.
* [http://flow.typo3.org/](http://flow.typo3.org/)
FuelPHP
=======
* FuelPHP is a simple, flexible, community driven PHP 5.3+ framework, based on the best ideas of other frameworks, with a fresh start!
* [http://fuelphp.com/](http://fuelphp.com/)
Kohana
======
* An elegant HMVC PHP5 framework that provides a rich set of components for building web applications.
* [http://kohanaframework.org/](http://kohanaframework.org/)
Laravel
=======
* The PHP framework for web artisans.
* Not really DI, has plugins instead. You don't import it as a dependency, you 'composer --create-app'.
* [http://laravel.com/](http://laravel.com/)
Nette
=====
* A popular tool for PHP web development. It is designed to be as usable as possible and is definitely one of the safest ones. It speaks your language and helps you to easily build better websites.
* [http://nette.org/](http://nette.org/)
Peej Tonic
===========
* Tonic is an open source less is more, RESTful Web application development PHP library designed to do things "the right way", where resources are king and the library gets out of the way and leaves the developer to get on with it.
* http://peej.github.io/tonic/
Phalcon
=======
* Phalcon is a web framework implemented as a C extension offering high performance and lower resource consumption.
* http://phalconphp.com/
Pines Framework
===============
Looks like it's focused on applications, not just websites. [http://pinesframework.org/](http://pinesframework.org/)
Qcodo
=====
_Code Less. Do More._ Antique framework, looks like it was pushed heavily years ago, now abandoned. [http://www.qcodo.com/](http://www.qcodo.com/)
Silex
=====
It's Symfony without Symfony's structure. [http://silex.sensiolabs.org/](http://silex.sensiolabs.org/)
Silverstripe
=============
* Easy to use CMS for website editors.
* More a CMS than a web framework.
* [http://www.silverstripe.org/](http://www.silverstripe.org/)
Slim Framework
==============
* Slim is a PHP micro framework that helps you quickly write simple yet powerful web applications and APIs.
* [http://www.slimframework.com/](http://www.slimframework.com/)
Symfony
=======
Symfony: High Performance PHP Framework for Web Development. [http://symfony.com/](http://symfony.com/)
Yaf
===
* PHP Framework in PHP extension.
* [http://pecl.php.net/package/yaf](http://pecl.php.net/package/yaf) [http://www.yafdev.com/](http://www.yafdev.com/)

Yii
===

* Uses ActiveRecord style ORM in controllers. Hasn't discovered namespaces yet.
* Netscape style error detected: "Yii 2 will be full rebuilt on top of PHP 5.3.0+ and is aimed to become a state-of-the-art of the new generation of PHP framework. Yii 2.0 will not be compatible with 1.1."
* [http://www.yiiframework.com/](http://www.yiiframework.com/)
Zend
====
Me gusta - though a bit complicated and has the magic strings everywhere. [http://www.zend.com/](http://www.zend.com/en/)
Zentao
======
Looks like a very popular Chinese framework. [http://www.zentao.net/en/](http://www.zentao.net/en/) [https://github.com/easysoft/zentaophp](https://github.com/easysoft/zentaophp)
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydNaming convention for Composer
/blog/4/Naming_convention_for_Composer
Just so that I can reference it in future, here is a Composer naming convention for libraries I publish.
<!-- end_preview -->
Intahwebz/*
===========
My own projects that are meant to be reusable by other people. e.g. PHPToJavascript
Danack/*
========
Forks of other people's software, e.g. my branch of Guzzle that hopefully will be merged back into main soon.
BaseReality
===========
My own projects that aren't meant to be used by other people, e.g. this website.
text/html1970-01-01T00:00:00+00:00http://blog.basereality.com/rssDan AckroydBlog moved
/blog/3/Blog_moved
After realising that I need to be able to post code samples on my blog, and looking at how annoying that would be to do under blogger, I've manned up and moved my blog back to my own site.
<!-- end_preview -->
The few [previous blog articles I wrote are here](http://basereality.blogspot.com), until of course Google decides to shut down blogger as not being cost effective.